00:00:00.000 Started by upstream project "autotest-per-patch" build number 132705 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.041 The recommended git tool is: git 00:00:00.041 using credential 00000000-0000-0000-0000-000000000002 00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.084 Using shallow fetch with depth 1 00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.084 > git --version # timeout=10 00:00:00.116 > git --version # 'git version 2.39.2' 00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.888 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.898 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.909 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.909 > git config core.sparsecheckout # timeout=10 00:00:03.921 > git read-tree -mu HEAD # timeout=10 00:00:03.934 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.952 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.952 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.079 [Pipeline] Start of Pipeline 00:00:04.091 [Pipeline] library 00:00:04.092 Loading library shm_lib@master 00:00:04.092 Library shm_lib@master is cached. Copying from home. 00:00:04.105 [Pipeline] node 00:00:04.115 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.116 [Pipeline] { 00:00:04.124 [Pipeline] catchError 00:00:04.126 [Pipeline] { 00:00:04.134 [Pipeline] wrap 00:00:04.140 [Pipeline] { 00:00:04.146 [Pipeline] stage 00:00:04.147 [Pipeline] { (Prologue) 00:00:04.159 [Pipeline] echo 00:00:04.160 Node: VM-host-WFP7 00:00:04.164 [Pipeline] cleanWs 00:00:04.172 [WS-CLEANUP] Deleting project workspace... 00:00:04.172 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.178 [WS-CLEANUP] done 00:00:04.380 [Pipeline] setCustomBuildProperty 00:00:04.475 [Pipeline] httpRequest 00:00:05.085 [Pipeline] echo 00:00:05.086 Sorcerer 10.211.164.20 is alive 00:00:05.096 [Pipeline] retry 00:00:05.098 [Pipeline] { 00:00:05.110 [Pipeline] httpRequest 00:00:05.114 HttpMethod: GET 00:00:05.115 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.115 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.117 Response Code: HTTP/1.1 200 OK 00:00:05.117 Success: Status code 200 is in the accepted range: 200,404 00:00:05.118 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.566 [Pipeline] } 00:00:05.581 [Pipeline] // retry 00:00:05.588 [Pipeline] sh 00:00:05.870 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.882 [Pipeline] httpRequest 00:00:06.227 [Pipeline] echo 00:00:06.229 Sorcerer 10.211.164.20 is alive 00:00:06.235 [Pipeline] retry 00:00:06.237 [Pipeline] { 00:00:06.247 [Pipeline] httpRequest 00:00:06.253 HttpMethod: GET 00:00:06.253 URL: 
http://10.211.164.20/packages/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz 00:00:06.254 Sending request to url: http://10.211.164.20/packages/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz 00:00:06.260 Response Code: HTTP/1.1 200 OK 00:00:06.263 Success: Status code 200 is in the accepted range: 200,404 00:00:06.264 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz 00:02:17.213 [Pipeline] } 00:02:17.231 [Pipeline] // retry 00:02:17.240 [Pipeline] sh 00:02:17.523 + tar --no-same-owner -xf spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz 00:02:20.074 [Pipeline] sh 00:02:20.357 + git -C spdk log --oneline -n5 00:02:20.357 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:20.357 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:20.357 e2dfdf06c accel/mlx5: Register post_poller handler 00:02:20.357 3c8001115 accel/mlx5: More precise condition to update DB 00:02:20.357 98eca6fa0 lib/thread: Add API to register a post poller handler 00:02:20.376 [Pipeline] writeFile 00:02:20.389 [Pipeline] sh 00:02:20.674 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:20.689 [Pipeline] sh 00:02:20.971 + cat autorun-spdk.conf 00:02:20.971 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.971 SPDK_RUN_ASAN=1 00:02:20.971 SPDK_RUN_UBSAN=1 00:02:20.971 SPDK_TEST_RAID=1 00:02:20.971 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.977 RUN_NIGHTLY=0 00:02:20.979 [Pipeline] } 00:02:20.995 [Pipeline] // stage 00:02:21.012 [Pipeline] stage 00:02:21.015 [Pipeline] { (Run VM) 00:02:21.030 [Pipeline] sh 00:02:21.314 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:21.314 + echo 'Start stage prepare_nvme.sh' 00:02:21.314 Start stage prepare_nvme.sh 00:02:21.314 + [[ -n 7 ]] 00:02:21.314 + disk_prefix=ex7 00:02:21.314 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:02:21.314 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 
00:02:21.314 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:02:21.314 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.314 ++ SPDK_RUN_ASAN=1 00:02:21.314 ++ SPDK_RUN_UBSAN=1 00:02:21.314 ++ SPDK_TEST_RAID=1 00:02:21.314 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.314 ++ RUN_NIGHTLY=0 00:02:21.314 + cd /var/jenkins/workspace/raid-vg-autotest 00:02:21.314 + nvme_files=() 00:02:21.314 + declare -A nvme_files 00:02:21.314 + backend_dir=/var/lib/libvirt/images/backends 00:02:21.314 + nvme_files['nvme.img']=5G 00:02:21.314 + nvme_files['nvme-cmb.img']=5G 00:02:21.314 + nvme_files['nvme-multi0.img']=4G 00:02:21.314 + nvme_files['nvme-multi1.img']=4G 00:02:21.314 + nvme_files['nvme-multi2.img']=4G 00:02:21.314 + nvme_files['nvme-openstack.img']=8G 00:02:21.314 + nvme_files['nvme-zns.img']=5G 00:02:21.314 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:21.314 + (( SPDK_TEST_FTL == 1 )) 00:02:21.314 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:21.314 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:21.314 + for nvme in "${!nvme_files[@]}" 00:02:21.314 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:02:21.314 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:21.314 + for nvme in "${!nvme_files[@]}" 00:02:21.314 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:02:21.314 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:21.314 + for nvme in "${!nvme_files[@]}" 00:02:21.314 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:02:21.314 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:21.314 + for nvme in "${!nvme_files[@]}" 00:02:21.315 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:02:21.315 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:21.315 + for nvme in "${!nvme_files[@]}" 00:02:21.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:02:21.315 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:21.315 + for nvme in "${!nvme_files[@]}" 00:02:21.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:02:21.315 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:21.315 + for nvme in "${!nvme_files[@]}" 00:02:21.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:02:21.574 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:21.574 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:02:21.574 + echo 'End stage prepare_nvme.sh' 00:02:21.574 End stage prepare_nvme.sh 00:02:21.585 [Pipeline] sh 00:02:21.916 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:21.917 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:02:21.917 00:02:21.917 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:02:21.917 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:02:21.917 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 
00:02:21.917 HELP=0 00:02:21.917 DRY_RUN=0 00:02:21.917 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:02:21.917 NVME_DISKS_TYPE=nvme,nvme, 00:02:21.917 NVME_AUTO_CREATE=0 00:02:21.917 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:02:21.917 NVME_CMB=,, 00:02:21.917 NVME_PMR=,, 00:02:21.917 NVME_ZNS=,, 00:02:21.917 NVME_MS=,, 00:02:21.917 NVME_FDP=,, 00:02:21.917 SPDK_VAGRANT_DISTRO=fedora39 00:02:21.917 SPDK_VAGRANT_VMCPU=10 00:02:21.917 SPDK_VAGRANT_VMRAM=12288 00:02:21.917 SPDK_VAGRANT_PROVIDER=libvirt 00:02:21.917 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:21.917 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:21.917 SPDK_OPENSTACK_NETWORK=0 00:02:21.917 VAGRANT_PACKAGE_BOX=0 00:02:21.917 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:21.917 FORCE_DISTRO=true 00:02:21.917 VAGRANT_BOX_VERSION= 00:02:21.917 EXTRA_VAGRANTFILES= 00:02:21.917 NIC_MODEL=virtio 00:02:21.917 00:02:21.917 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:02:21.917 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:02:24.453 Bringing machine 'default' up with 'libvirt' provider... 00:02:24.712 ==> default: Creating image (snapshot of base box volume). 00:02:24.972 ==> default: Creating domain with the following settings... 
00:02:24.972 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733428525_3bffa855eca312a147fc 00:02:24.972 ==> default: -- Domain type: kvm 00:02:24.972 ==> default: -- Cpus: 10 00:02:24.972 ==> default: -- Feature: acpi 00:02:24.972 ==> default: -- Feature: apic 00:02:24.972 ==> default: -- Feature: pae 00:02:24.972 ==> default: -- Memory: 12288M 00:02:24.972 ==> default: -- Memory Backing: hugepages: 00:02:24.972 ==> default: -- Management MAC: 00:02:24.972 ==> default: -- Loader: 00:02:24.972 ==> default: -- Nvram: 00:02:24.972 ==> default: -- Base box: spdk/fedora39 00:02:24.972 ==> default: -- Storage pool: default 00:02:24.972 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733428525_3bffa855eca312a147fc.img (20G) 00:02:24.972 ==> default: -- Volume Cache: default 00:02:24.972 ==> default: -- Kernel: 00:02:24.972 ==> default: -- Initrd: 00:02:24.972 ==> default: -- Graphics Type: vnc 00:02:24.972 ==> default: -- Graphics Port: -1 00:02:24.972 ==> default: -- Graphics IP: 127.0.0.1 00:02:24.972 ==> default: -- Graphics Password: Not defined 00:02:24.972 ==> default: -- Video Type: cirrus 00:02:24.972 ==> default: -- Video VRAM: 9216 00:02:24.972 ==> default: -- Sound Type: 00:02:24.972 ==> default: -- Keymap: en-us 00:02:24.972 ==> default: -- TPM Path: 00:02:24.972 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:24.972 ==> default: -- Command line args: 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:24.972 ==> default: -> value=-drive, 00:02:24.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:24.972 ==> default: -> value=-drive, 00:02:24.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:24.972 ==> default: -> value=-drive, 00:02:24.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:24.972 ==> default: -> value=-drive, 00:02:24.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:24.972 ==> default: -> value=-device, 00:02:24.972 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:24.972 ==> default: Creating shared folders metadata... 00:02:24.972 ==> default: Starting domain. 00:02:26.351 ==> default: Waiting for domain to get an IP address... 00:02:52.903 ==> default: Waiting for SSH to become available... 00:02:52.903 ==> default: Configuring and enabling network interfaces... 00:02:58.178 default: SSH address: 192.168.121.182:22 00:02:58.178 default: SSH username: vagrant 00:02:58.178 default: SSH auth method: private key 00:03:01.471 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:11.515 ==> default: Mounting SSHFS shared folder... 00:03:12.891 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:12.891 ==> default: Checking Mount.. 
00:03:14.268 ==> default: Folder Successfully Mounted! 00:03:14.268 ==> default: Running provisioner: file... 00:03:15.680 default: ~/.gitconfig => .gitconfig 00:03:15.937 00:03:15.937 SUCCESS! 00:03:15.937 00:03:15.937 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:15.937 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:15.937 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:15.937 00:03:15.945 [Pipeline] } 00:03:15.960 [Pipeline] // stage 00:03:15.967 [Pipeline] dir 00:03:15.967 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:03:15.969 [Pipeline] { 00:03:15.981 [Pipeline] catchError 00:03:15.982 [Pipeline] { 00:03:15.995 [Pipeline] sh 00:03:16.273 + vagrant ssh-config --host vagrant 00:03:16.273 + sed -ne /^Host/,$p 00:03:16.273 + tee ssh_conf 00:03:18.805 Host vagrant 00:03:18.805 HostName 192.168.121.182 00:03:18.805 User vagrant 00:03:18.805 Port 22 00:03:18.805 UserKnownHostsFile /dev/null 00:03:18.805 StrictHostKeyChecking no 00:03:18.805 PasswordAuthentication no 00:03:18.805 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:18.805 IdentitiesOnly yes 00:03:18.805 LogLevel FATAL 00:03:18.805 ForwardAgent yes 00:03:18.805 ForwardX11 yes 00:03:18.805 00:03:18.818 [Pipeline] withEnv 00:03:18.821 [Pipeline] { 00:03:18.835 [Pipeline] sh 00:03:19.130 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:19.130 source /etc/os-release 00:03:19.130 [[ -e /image.version ]] && img=$(< /image.version) 00:03:19.130 # Minimal, systemd-like check. 
00:03:19.130 if [[ -e /.dockerenv ]]; then 00:03:19.130 # Clear garbage from the node's name: 00:03:19.130 # agt-er_autotest_547-896 -> autotest_547-896 00:03:19.130 # $HOSTNAME is the actual container id 00:03:19.130 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:19.130 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:19.130 # We can assume this is a mount from a host where container is running, 00:03:19.130 # so fetch its hostname to easily identify the target swarm worker. 00:03:19.130 container="$(< /etc/hostname) ($agent)" 00:03:19.130 else 00:03:19.130 # Fallback 00:03:19.130 container=$agent 00:03:19.130 fi 00:03:19.130 fi 00:03:19.130 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:19.130 00:03:19.411 [Pipeline] } 00:03:19.429 [Pipeline] // withEnv 00:03:19.437 [Pipeline] setCustomBuildProperty 00:03:19.451 [Pipeline] stage 00:03:19.453 [Pipeline] { (Tests) 00:03:19.469 [Pipeline] sh 00:03:19.748 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:20.019 [Pipeline] sh 00:03:20.300 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:20.570 [Pipeline] timeout 00:03:20.571 Timeout set to expire in 1 hr 30 min 00:03:20.572 [Pipeline] { 00:03:20.584 [Pipeline] sh 00:03:20.868 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:21.436 HEAD is now at a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:03:21.448 [Pipeline] sh 00:03:21.731 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:22.004 [Pipeline] sh 00:03:22.286 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:22.562 [Pipeline] sh 00:03:22.840 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:03:23.099 ++ readlink -f spdk_repo 00:03:23.099 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:23.099 + [[ -n /home/vagrant/spdk_repo ]] 00:03:23.099 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:23.099 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:23.099 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:23.099 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:03:23.099 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:23.099 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:23.099 + cd /home/vagrant/spdk_repo 00:03:23.099 + source /etc/os-release 00:03:23.099 ++ NAME='Fedora Linux' 00:03:23.099 ++ VERSION='39 (Cloud Edition)' 00:03:23.099 ++ ID=fedora 00:03:23.099 ++ VERSION_ID=39 00:03:23.099 ++ VERSION_CODENAME= 00:03:23.099 ++ PLATFORM_ID=platform:f39 00:03:23.099 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:23.099 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:23.099 ++ LOGO=fedora-logo-icon 00:03:23.099 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:23.099 ++ HOME_URL=https://fedoraproject.org/ 00:03:23.099 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:23.099 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:23.099 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:23.099 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:23.099 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:23.099 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:23.099 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:23.099 ++ SUPPORT_END=2024-11-12 00:03:23.099 ++ VARIANT='Cloud Edition' 00:03:23.099 ++ VARIANT_ID=cloud 00:03:23.099 + uname -a 00:03:23.099 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:23.099 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:23.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.667 Hugepages 00:03:23.667 
node hugesize free / total 00:03:23.667 node0 1048576kB 0 / 0 00:03:23.667 node0 2048kB 0 / 0 00:03:23.667 00:03:23.667 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.667 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:23.667 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:23.667 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:23.667 + rm -f /tmp/spdk-ld-path 00:03:23.667 + source autorun-spdk.conf 00:03:23.667 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.667 ++ SPDK_RUN_ASAN=1 00:03:23.667 ++ SPDK_RUN_UBSAN=1 00:03:23.667 ++ SPDK_TEST_RAID=1 00:03:23.667 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:23.667 ++ RUN_NIGHTLY=0 00:03:23.667 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:23.667 + [[ -n '' ]] 00:03:23.667 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:23.667 + for M in /var/spdk/build-*-manifest.txt 00:03:23.667 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:23.667 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.667 + for M in /var/spdk/build-*-manifest.txt 00:03:23.667 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:23.667 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.667 + for M in /var/spdk/build-*-manifest.txt 00:03:23.667 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:23.667 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.667 ++ uname 00:03:23.667 + [[ Linux == \L\i\n\u\x ]] 00:03:23.667 + sudo dmesg -T 00:03:23.925 + sudo dmesg --clear 00:03:23.925 + dmesg_pid=5442 00:03:23.925 + [[ Fedora Linux == FreeBSD ]] 00:03:23.925 + sudo dmesg -Tw 00:03:23.925 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:23.925 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:23.925 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:23.925 + [[ -x /usr/src/fio-static/fio ]] 00:03:23.925 + export FIO_BIN=/usr/src/fio-static/fio 
00:03:23.925 + FIO_BIN=/usr/src/fio-static/fio 00:03:23.925 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:23.925 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:23.925 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:23.925 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:23.925 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:23.925 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:23.925 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:23.925 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:23.925 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.925 19:56:25 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:23.925 19:56:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:23.925 19:56:25 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:03:23.925 19:56:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:23.925 19:56:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.925 19:56:25 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:23.925 19:56:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.925 19:56:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:23.925 19:56:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:23.925 19:56:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.925 
19:56:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.925 19:56:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.925 19:56:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.925 19:56:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.925 19:56:25 -- paths/export.sh@5 -- $ export PATH 00:03:23.925 19:56:25 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.925 19:56:25 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.925 19:56:25 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:24.194 19:56:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733428585.XXXXXX 00:03:24.194 19:56:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733428585.JoCohF 00:03:24.194 19:56:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:24.194 19:56:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:24.194 19:56:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:24.194 19:56:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:24.194 19:56:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:24.194 19:56:25 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:24.194 19:56:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:24.194 19:56:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:24.194 19:56:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:03:24.194 19:56:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:24.195 19:56:25 -- pm/common@17 -- $ local monitor 00:03:24.195 19:56:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.195 19:56:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.195 19:56:25 -- pm/common@25 -- $ sleep 1 00:03:24.195 19:56:25 -- pm/common@21 -- $ date +%s 00:03:24.195 19:56:25 -- pm/common@21 -- $ date +%s 00:03:24.195 19:56:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733428585 00:03:24.195 19:56:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733428585 00:03:24.195 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733428585_collect-cpu-load.pm.log 00:03:24.195 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733428585_collect-vmstat.pm.log 00:03:25.146 19:56:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:25.146 19:56:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:25.146 19:56:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:25.146 19:56:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:25.146 19:56:26 -- spdk/autobuild.sh@16 -- $ date -u 00:03:25.146 Thu Dec 5 07:56:26 PM UTC 2024 00:03:25.146 19:56:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:25.146 v25.01-pre-302-ga333974e5 00:03:25.146 19:56:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:25.146 19:56:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:25.146 19:56:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:25.146 19:56:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:25.146 19:56:26 -- common/autotest_common.sh@10 -- $ set +x 
00:03:25.146 ************************************ 00:03:25.146 START TEST asan 00:03:25.146 ************************************ 00:03:25.146 using asan 00:03:25.146 19:56:26 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:25.146 00:03:25.146 real 0m0.001s 00:03:25.146 user 0m0.000s 00:03:25.146 sys 0m0.000s 00:03:25.146 19:56:26 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:25.146 19:56:26 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:25.146 ************************************ 00:03:25.146 END TEST asan 00:03:25.146 ************************************ 00:03:25.146 19:56:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:25.146 19:56:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:25.146 19:56:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:25.146 19:56:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:25.146 19:56:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.146 ************************************ 00:03:25.146 START TEST ubsan 00:03:25.146 ************************************ 00:03:25.146 using ubsan 00:03:25.146 19:56:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:25.146 00:03:25.146 real 0m0.000s 00:03:25.146 user 0m0.000s 00:03:25.146 sys 0m0.000s 00:03:25.146 19:56:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:25.146 19:56:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:25.146 ************************************ 00:03:25.147 END TEST ubsan 00:03:25.147 ************************************ 00:03:25.147 19:56:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:25.147 19:56:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:25.147 19:56:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:25.147 19:56:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:25.413 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:25.413 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:25.981 Using 'verbs' RDMA provider 00:03:41.814 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:59.934 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:59.934 Creating mk/config.mk...done. 00:03:59.934 Creating mk/cc.flags.mk...done. 00:03:59.934 Type 'make' to build. 00:03:59.934 19:56:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:59.934 19:56:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:59.934 19:56:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:59.934 19:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:03:59.934 ************************************ 00:03:59.934 START TEST make 00:03:59.934 ************************************ 00:03:59.934 19:56:59 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:59.934 make[1]: Nothing to be done for 'all'. 
00:04:12.144 The Meson build system 00:04:12.144 Version: 1.5.0 00:04:12.144 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:12.144 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:12.144 Build type: native build 00:04:12.144 Program cat found: YES (/usr/bin/cat) 00:04:12.144 Project name: DPDK 00:04:12.144 Project version: 24.03.0 00:04:12.144 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:12.144 C linker for the host machine: cc ld.bfd 2.40-14 00:04:12.144 Host machine cpu family: x86_64 00:04:12.144 Host machine cpu: x86_64 00:04:12.144 Message: ## Building in Developer Mode ## 00:04:12.144 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:12.144 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:12.144 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:12.144 Program python3 found: YES (/usr/bin/python3) 00:04:12.144 Program cat found: YES (/usr/bin/cat) 00:04:12.144 Compiler for C supports arguments -march=native: YES 00:04:12.144 Checking for size of "void *" : 8 00:04:12.144 Checking for size of "void *" : 8 (cached) 00:04:12.144 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:12.144 Library m found: YES 00:04:12.144 Library numa found: YES 00:04:12.144 Has header "numaif.h" : YES 00:04:12.144 Library fdt found: NO 00:04:12.144 Library execinfo found: NO 00:04:12.144 Has header "execinfo.h" : YES 00:04:12.144 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:12.144 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:12.144 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:12.144 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:12.144 Run-time dependency openssl found: YES 3.1.1 00:04:12.144 Run-time dependency libpcap found: YES 1.10.4 00:04:12.144 Has header "pcap.h" with dependency 
libpcap: YES 00:04:12.144 Compiler for C supports arguments -Wcast-qual: YES 00:04:12.144 Compiler for C supports arguments -Wdeprecated: YES 00:04:12.144 Compiler for C supports arguments -Wformat: YES 00:04:12.144 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:12.144 Compiler for C supports arguments -Wformat-security: NO 00:04:12.144 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:12.144 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:12.144 Compiler for C supports arguments -Wnested-externs: YES 00:04:12.144 Compiler for C supports arguments -Wold-style-definition: YES 00:04:12.144 Compiler for C supports arguments -Wpointer-arith: YES 00:04:12.144 Compiler for C supports arguments -Wsign-compare: YES 00:04:12.144 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:12.144 Compiler for C supports arguments -Wundef: YES 00:04:12.144 Compiler for C supports arguments -Wwrite-strings: YES 00:04:12.144 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:12.144 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:12.144 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:12.144 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:12.144 Program objdump found: YES (/usr/bin/objdump) 00:04:12.144 Compiler for C supports arguments -mavx512f: YES 00:04:12.144 Checking if "AVX512 checking" compiles: YES 00:04:12.144 Fetching value of define "__SSE4_2__" : 1 00:04:12.144 Fetching value of define "__AES__" : 1 00:04:12.144 Fetching value of define "__AVX__" : 1 00:04:12.144 Fetching value of define "__AVX2__" : 1 00:04:12.144 Fetching value of define "__AVX512BW__" : 1 00:04:12.144 Fetching value of define "__AVX512CD__" : 1 00:04:12.144 Fetching value of define "__AVX512DQ__" : 1 00:04:12.144 Fetching value of define "__AVX512F__" : 1 00:04:12.144 Fetching value of define "__AVX512VL__" : 1 00:04:12.144 Fetching value of define 
"__PCLMUL__" : 1 00:04:12.144 Fetching value of define "__RDRND__" : 1 00:04:12.144 Fetching value of define "__RDSEED__" : 1 00:04:12.144 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:12.144 Fetching value of define "__znver1__" : (undefined) 00:04:12.144 Fetching value of define "__znver2__" : (undefined) 00:04:12.144 Fetching value of define "__znver3__" : (undefined) 00:04:12.144 Fetching value of define "__znver4__" : (undefined) 00:04:12.144 Library asan found: YES 00:04:12.144 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:12.144 Message: lib/log: Defining dependency "log" 00:04:12.144 Message: lib/kvargs: Defining dependency "kvargs" 00:04:12.144 Message: lib/telemetry: Defining dependency "telemetry" 00:04:12.144 Library rt found: YES 00:04:12.144 Checking for function "getentropy" : NO 00:04:12.144 Message: lib/eal: Defining dependency "eal" 00:04:12.144 Message: lib/ring: Defining dependency "ring" 00:04:12.144 Message: lib/rcu: Defining dependency "rcu" 00:04:12.144 Message: lib/mempool: Defining dependency "mempool" 00:04:12.144 Message: lib/mbuf: Defining dependency "mbuf" 00:04:12.144 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:12.144 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:12.144 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:12.144 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:12.144 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:12.144 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:12.144 Compiler for C supports arguments -mpclmul: YES 00:04:12.144 Compiler for C supports arguments -maes: YES 00:04:12.144 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:12.144 Compiler for C supports arguments -mavx512bw: YES 00:04:12.144 Compiler for C supports arguments -mavx512dq: YES 00:04:12.144 Compiler for C supports arguments -mavx512vl: YES 00:04:12.144 Compiler for C supports arguments -mvpclmulqdq: YES 
00:04:12.144 Compiler for C supports arguments -mavx2: YES 00:04:12.144 Compiler for C supports arguments -mavx: YES 00:04:12.144 Message: lib/net: Defining dependency "net" 00:04:12.144 Message: lib/meter: Defining dependency "meter" 00:04:12.144 Message: lib/ethdev: Defining dependency "ethdev" 00:04:12.144 Message: lib/pci: Defining dependency "pci" 00:04:12.144 Message: lib/cmdline: Defining dependency "cmdline" 00:04:12.144 Message: lib/hash: Defining dependency "hash" 00:04:12.144 Message: lib/timer: Defining dependency "timer" 00:04:12.144 Message: lib/compressdev: Defining dependency "compressdev" 00:04:12.144 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:12.144 Message: lib/dmadev: Defining dependency "dmadev" 00:04:12.144 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:12.144 Message: lib/power: Defining dependency "power" 00:04:12.144 Message: lib/reorder: Defining dependency "reorder" 00:04:12.144 Message: lib/security: Defining dependency "security" 00:04:12.144 Has header "linux/userfaultfd.h" : YES 00:04:12.144 Has header "linux/vduse.h" : YES 00:04:12.144 Message: lib/vhost: Defining dependency "vhost" 00:04:12.144 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:12.144 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:12.144 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:12.144 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:12.144 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:12.144 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:12.144 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:12.144 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:12.144 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:12.144 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:12.144 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:12.144 Configuring doxy-api-html.conf using configuration 00:04:12.144 Configuring doxy-api-man.conf using configuration 00:04:12.144 Program mandb found: YES (/usr/bin/mandb) 00:04:12.144 Program sphinx-build found: NO 00:04:12.144 Configuring rte_build_config.h using configuration 00:04:12.144 Message: 00:04:12.144 ================= 00:04:12.144 Applications Enabled 00:04:12.144 ================= 00:04:12.144 00:04:12.144 apps: 00:04:12.144 00:04:12.144 00:04:12.144 Message: 00:04:12.144 ================= 00:04:12.144 Libraries Enabled 00:04:12.144 ================= 00:04:12.144 00:04:12.144 libs: 00:04:12.144 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:12.144 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:12.144 cryptodev, dmadev, power, reorder, security, vhost, 00:04:12.144 00:04:12.144 Message: 00:04:12.144 =============== 00:04:12.144 Drivers Enabled 00:04:12.144 =============== 00:04:12.144 00:04:12.144 common: 00:04:12.144 00:04:12.144 bus: 00:04:12.144 pci, vdev, 00:04:12.144 mempool: 00:04:12.144 ring, 00:04:12.144 dma: 00:04:12.144 00:04:12.144 net: 00:04:12.144 00:04:12.144 crypto: 00:04:12.144 00:04:12.144 compress: 00:04:12.144 00:04:12.144 vdpa: 00:04:12.144 00:04:12.144 00:04:12.144 Message: 00:04:12.144 ================= 00:04:12.144 Content Skipped 00:04:12.144 ================= 00:04:12.144 00:04:12.144 apps: 00:04:12.144 dumpcap: explicitly disabled via build config 00:04:12.144 graph: explicitly disabled via build config 00:04:12.144 pdump: explicitly disabled via build config 00:04:12.144 proc-info: explicitly disabled via build config 00:04:12.144 test-acl: explicitly disabled via build config 00:04:12.144 test-bbdev: explicitly disabled via build config 00:04:12.144 test-cmdline: explicitly disabled via build config 00:04:12.144 test-compress-perf: explicitly disabled via build config 00:04:12.144 test-crypto-perf: explicitly disabled via build 
config 00:04:12.144 test-dma-perf: explicitly disabled via build config 00:04:12.144 test-eventdev: explicitly disabled via build config 00:04:12.144 test-fib: explicitly disabled via build config 00:04:12.144 test-flow-perf: explicitly disabled via build config 00:04:12.145 test-gpudev: explicitly disabled via build config 00:04:12.145 test-mldev: explicitly disabled via build config 00:04:12.145 test-pipeline: explicitly disabled via build config 00:04:12.145 test-pmd: explicitly disabled via build config 00:04:12.145 test-regex: explicitly disabled via build config 00:04:12.145 test-sad: explicitly disabled via build config 00:04:12.145 test-security-perf: explicitly disabled via build config 00:04:12.145 00:04:12.145 libs: 00:04:12.145 argparse: explicitly disabled via build config 00:04:12.145 metrics: explicitly disabled via build config 00:04:12.145 acl: explicitly disabled via build config 00:04:12.145 bbdev: explicitly disabled via build config 00:04:12.145 bitratestats: explicitly disabled via build config 00:04:12.145 bpf: explicitly disabled via build config 00:04:12.145 cfgfile: explicitly disabled via build config 00:04:12.145 distributor: explicitly disabled via build config 00:04:12.145 efd: explicitly disabled via build config 00:04:12.145 eventdev: explicitly disabled via build config 00:04:12.145 dispatcher: explicitly disabled via build config 00:04:12.145 gpudev: explicitly disabled via build config 00:04:12.145 gro: explicitly disabled via build config 00:04:12.145 gso: explicitly disabled via build config 00:04:12.145 ip_frag: explicitly disabled via build config 00:04:12.145 jobstats: explicitly disabled via build config 00:04:12.145 latencystats: explicitly disabled via build config 00:04:12.145 lpm: explicitly disabled via build config 00:04:12.145 member: explicitly disabled via build config 00:04:12.145 pcapng: explicitly disabled via build config 00:04:12.145 rawdev: explicitly disabled via build config 00:04:12.145 regexdev: explicitly 
disabled via build config 00:04:12.145 mldev: explicitly disabled via build config 00:04:12.145 rib: explicitly disabled via build config 00:04:12.145 sched: explicitly disabled via build config 00:04:12.145 stack: explicitly disabled via build config 00:04:12.145 ipsec: explicitly disabled via build config 00:04:12.145 pdcp: explicitly disabled via build config 00:04:12.145 fib: explicitly disabled via build config 00:04:12.145 port: explicitly disabled via build config 00:04:12.145 pdump: explicitly disabled via build config 00:04:12.145 table: explicitly disabled via build config 00:04:12.145 pipeline: explicitly disabled via build config 00:04:12.145 graph: explicitly disabled via build config 00:04:12.145 node: explicitly disabled via build config 00:04:12.145 00:04:12.145 drivers: 00:04:12.145 common/cpt: not in enabled drivers build config 00:04:12.145 common/dpaax: not in enabled drivers build config 00:04:12.145 common/iavf: not in enabled drivers build config 00:04:12.145 common/idpf: not in enabled drivers build config 00:04:12.145 common/ionic: not in enabled drivers build config 00:04:12.145 common/mvep: not in enabled drivers build config 00:04:12.145 common/octeontx: not in enabled drivers build config 00:04:12.145 bus/auxiliary: not in enabled drivers build config 00:04:12.145 bus/cdx: not in enabled drivers build config 00:04:12.145 bus/dpaa: not in enabled drivers build config 00:04:12.145 bus/fslmc: not in enabled drivers build config 00:04:12.145 bus/ifpga: not in enabled drivers build config 00:04:12.145 bus/platform: not in enabled drivers build config 00:04:12.145 bus/uacce: not in enabled drivers build config 00:04:12.145 bus/vmbus: not in enabled drivers build config 00:04:12.145 common/cnxk: not in enabled drivers build config 00:04:12.145 common/mlx5: not in enabled drivers build config 00:04:12.145 common/nfp: not in enabled drivers build config 00:04:12.145 common/nitrox: not in enabled drivers build config 00:04:12.145 common/qat: not 
in enabled drivers build config 00:04:12.145 common/sfc_efx: not in enabled drivers build config 00:04:12.145 mempool/bucket: not in enabled drivers build config 00:04:12.145 mempool/cnxk: not in enabled drivers build config 00:04:12.145 mempool/dpaa: not in enabled drivers build config 00:04:12.145 mempool/dpaa2: not in enabled drivers build config 00:04:12.145 mempool/octeontx: not in enabled drivers build config 00:04:12.145 mempool/stack: not in enabled drivers build config 00:04:12.145 dma/cnxk: not in enabled drivers build config 00:04:12.145 dma/dpaa: not in enabled drivers build config 00:04:12.145 dma/dpaa2: not in enabled drivers build config 00:04:12.145 dma/hisilicon: not in enabled drivers build config 00:04:12.145 dma/idxd: not in enabled drivers build config 00:04:12.145 dma/ioat: not in enabled drivers build config 00:04:12.145 dma/skeleton: not in enabled drivers build config 00:04:12.145 net/af_packet: not in enabled drivers build config 00:04:12.145 net/af_xdp: not in enabled drivers build config 00:04:12.145 net/ark: not in enabled drivers build config 00:04:12.145 net/atlantic: not in enabled drivers build config 00:04:12.145 net/avp: not in enabled drivers build config 00:04:12.145 net/axgbe: not in enabled drivers build config 00:04:12.145 net/bnx2x: not in enabled drivers build config 00:04:12.145 net/bnxt: not in enabled drivers build config 00:04:12.145 net/bonding: not in enabled drivers build config 00:04:12.145 net/cnxk: not in enabled drivers build config 00:04:12.145 net/cpfl: not in enabled drivers build config 00:04:12.145 net/cxgbe: not in enabled drivers build config 00:04:12.145 net/dpaa: not in enabled drivers build config 00:04:12.145 net/dpaa2: not in enabled drivers build config 00:04:12.145 net/e1000: not in enabled drivers build config 00:04:12.145 net/ena: not in enabled drivers build config 00:04:12.145 net/enetc: not in enabled drivers build config 00:04:12.145 net/enetfec: not in enabled drivers build config 
00:04:12.145 net/enic: not in enabled drivers build config 00:04:12.145 net/failsafe: not in enabled drivers build config 00:04:12.145 net/fm10k: not in enabled drivers build config 00:04:12.145 net/gve: not in enabled drivers build config 00:04:12.145 net/hinic: not in enabled drivers build config 00:04:12.145 net/hns3: not in enabled drivers build config 00:04:12.145 net/i40e: not in enabled drivers build config 00:04:12.145 net/iavf: not in enabled drivers build config 00:04:12.145 net/ice: not in enabled drivers build config 00:04:12.145 net/idpf: not in enabled drivers build config 00:04:12.145 net/igc: not in enabled drivers build config 00:04:12.145 net/ionic: not in enabled drivers build config 00:04:12.145 net/ipn3ke: not in enabled drivers build config 00:04:12.145 net/ixgbe: not in enabled drivers build config 00:04:12.145 net/mana: not in enabled drivers build config 00:04:12.145 net/memif: not in enabled drivers build config 00:04:12.145 net/mlx4: not in enabled drivers build config 00:04:12.145 net/mlx5: not in enabled drivers build config 00:04:12.145 net/mvneta: not in enabled drivers build config 00:04:12.145 net/mvpp2: not in enabled drivers build config 00:04:12.145 net/netvsc: not in enabled drivers build config 00:04:12.145 net/nfb: not in enabled drivers build config 00:04:12.145 net/nfp: not in enabled drivers build config 00:04:12.145 net/ngbe: not in enabled drivers build config 00:04:12.145 net/null: not in enabled drivers build config 00:04:12.145 net/octeontx: not in enabled drivers build config 00:04:12.145 net/octeon_ep: not in enabled drivers build config 00:04:12.145 net/pcap: not in enabled drivers build config 00:04:12.145 net/pfe: not in enabled drivers build config 00:04:12.145 net/qede: not in enabled drivers build config 00:04:12.145 net/ring: not in enabled drivers build config 00:04:12.145 net/sfc: not in enabled drivers build config 00:04:12.145 net/softnic: not in enabled drivers build config 00:04:12.145 net/tap: not in 
enabled drivers build config 00:04:12.145 net/thunderx: not in enabled drivers build config 00:04:12.145 net/txgbe: not in enabled drivers build config 00:04:12.145 net/vdev_netvsc: not in enabled drivers build config 00:04:12.145 net/vhost: not in enabled drivers build config 00:04:12.145 net/virtio: not in enabled drivers build config 00:04:12.145 net/vmxnet3: not in enabled drivers build config 00:04:12.145 raw/*: missing internal dependency, "rawdev" 00:04:12.145 crypto/armv8: not in enabled drivers build config 00:04:12.145 crypto/bcmfs: not in enabled drivers build config 00:04:12.145 crypto/caam_jr: not in enabled drivers build config 00:04:12.145 crypto/ccp: not in enabled drivers build config 00:04:12.145 crypto/cnxk: not in enabled drivers build config 00:04:12.145 crypto/dpaa_sec: not in enabled drivers build config 00:04:12.145 crypto/dpaa2_sec: not in enabled drivers build config 00:04:12.145 crypto/ipsec_mb: not in enabled drivers build config 00:04:12.145 crypto/mlx5: not in enabled drivers build config 00:04:12.145 crypto/mvsam: not in enabled drivers build config 00:04:12.145 crypto/nitrox: not in enabled drivers build config 00:04:12.145 crypto/null: not in enabled drivers build config 00:04:12.145 crypto/octeontx: not in enabled drivers build config 00:04:12.145 crypto/openssl: not in enabled drivers build config 00:04:12.145 crypto/scheduler: not in enabled drivers build config 00:04:12.145 crypto/uadk: not in enabled drivers build config 00:04:12.145 crypto/virtio: not in enabled drivers build config 00:04:12.145 compress/isal: not in enabled drivers build config 00:04:12.145 compress/mlx5: not in enabled drivers build config 00:04:12.145 compress/nitrox: not in enabled drivers build config 00:04:12.145 compress/octeontx: not in enabled drivers build config 00:04:12.145 compress/zlib: not in enabled drivers build config 00:04:12.145 regex/*: missing internal dependency, "regexdev" 00:04:12.145 ml/*: missing internal dependency, "mldev" 
00:04:12.145 vdpa/ifc: not in enabled drivers build config 00:04:12.145 vdpa/mlx5: not in enabled drivers build config 00:04:12.145 vdpa/nfp: not in enabled drivers build config 00:04:12.145 vdpa/sfc: not in enabled drivers build config 00:04:12.145 event/*: missing internal dependency, "eventdev" 00:04:12.145 baseband/*: missing internal dependency, "bbdev" 00:04:12.145 gpu/*: missing internal dependency, "gpudev" 00:04:12.145 00:04:12.145 00:04:12.145 Build targets in project: 85 00:04:12.145 00:04:12.145 DPDK 24.03.0 00:04:12.145 00:04:12.145 User defined options 00:04:12.145 buildtype : debug 00:04:12.145 default_library : shared 00:04:12.145 libdir : lib 00:04:12.145 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:12.145 b_sanitize : address 00:04:12.145 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:12.145 c_link_args : 00:04:12.145 cpu_instruction_set: native 00:04:12.145 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:12.146 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:12.146 enable_docs : false 00:04:12.146 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:12.146 enable_kmods : false 00:04:12.146 max_lcores : 128 00:04:12.146 tests : false 00:04:12.146 00:04:12.146 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:12.146 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:12.146 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:04:12.146 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:12.146 [3/268] Linking static target lib/librte_kvargs.a 00:04:12.146 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:12.146 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:12.146 [6/268] Linking static target lib/librte_log.a 00:04:12.146 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.146 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:12.146 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:12.146 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:12.146 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:12.146 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:12.146 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:12.146 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:12.146 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:12.146 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:12.146 [17/268] Linking static target lib/librte_telemetry.a 00:04:12.146 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:12.404 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.662 [20/268] Linking target lib/librte_log.so.24.1 00:04:12.662 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:12.662 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:12.921 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:12.921 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:12.921 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:12.921 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:12.921 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:12.921 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:12.921 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:12.921 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.921 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:12.921 [32/268] Linking target lib/librte_kvargs.so.24.1 00:04:13.179 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:13.179 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:13.179 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:13.437 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:13.437 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:13.437 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:13.437 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:13.437 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:13.696 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:13.696 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:13.696 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:13.696 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:13.696 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 
00:04:13.696 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:13.953 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:13.953 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:13.953 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:14.210 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:14.210 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:14.210 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:14.467 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:14.467 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:14.467 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:14.467 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:14.467 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:14.724 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:14.724 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:14.724 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:14.724 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:14.982 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:14.982 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:14.982 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:14.982 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:15.240 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:15.240 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:15.497 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:15.497 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:15.497 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:15.756 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:15.756 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:15.756 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:15.756 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:15.756 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:15.756 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:16.014 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:16.014 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:16.014 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:16.014 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:16.273 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:16.273 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:16.532 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:16.532 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:16.532 [85/268] Linking static target lib/librte_eal.a 00:04:16.532 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:16.532 [87/268] Linking static target lib/librte_ring.a 00:04:16.789 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:16.789 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:17.047 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:17.047 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:04:17.047 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:17.047 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:17.047 [94/268] Linking static target lib/librte_mempool.a 00:04:17.304 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.304 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:17.304 [97/268] Linking static target lib/librte_rcu.a 00:04:17.304 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:17.304 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:17.560 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:17.560 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:17.560 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:17.560 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:17.817 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:17.817 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:17.817 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.817 [107/268] Linking static target lib/librte_net.a 00:04:18.075 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:18.075 [109/268] Linking static target lib/librte_meter.a 00:04:18.075 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:18.075 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:18.334 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:18.334 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.334 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:18.593 [115/268] Generating lib/net.sym_chk with a custom 
command (wrapped by meson to capture output) 00:04:18.593 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.593 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:18.593 [118/268] Linking static target lib/librte_mbuf.a 00:04:18.850 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:18.850 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:19.108 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:19.108 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:19.675 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:19.675 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:19.675 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:19.675 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:19.675 [127/268] Linking static target lib/librte_pci.a 00:04:19.675 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:19.675 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:19.959 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:19.959 [131/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.959 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:19.959 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:19.959 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:19.959 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.959 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:19.959 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 
00:04:20.242 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:20.242 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:20.242 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:20.242 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:20.242 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:20.242 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:20.242 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:20.242 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:20.242 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:20.242 [147/268] Linking static target lib/librte_cmdline.a 00:04:20.501 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:20.501 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:20.759 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:21.017 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:21.017 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:21.017 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:21.017 [154/268] Linking static target lib/librte_timer.a 00:04:21.017 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:21.292 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:21.292 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:21.292 [158/268] Linking static target lib/librte_compressdev.a 00:04:21.292 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:21.292 [160/268] Linking static target 
lib/librte_ethdev.a 00:04:21.549 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:21.549 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:21.549 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:21.807 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.807 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:21.807 [166/268] Linking static target lib/librte_dmadev.a 00:04:21.807 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:21.807 [168/268] Linking static target lib/librte_hash.a 00:04:21.807 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:22.065 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.065 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:22.065 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:22.323 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:22.323 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.323 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:22.581 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:22.581 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:22.581 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:22.581 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.839 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:22.839 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:23.097 [182/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.097 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:23.097 [184/268] Linking static target lib/librte_power.a 00:04:23.354 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:23.354 [186/268] Linking static target lib/librte_cryptodev.a 00:04:23.354 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:23.354 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:23.354 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:23.354 [190/268] Linking static target lib/librte_reorder.a 00:04:23.612 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:23.612 [192/268] Linking static target lib/librte_security.a 00:04:23.612 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:24.177 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.178 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:24.178 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.435 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.435 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:24.435 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:24.692 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:24.692 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:24.692 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:24.950 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:25.208 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:25.208 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:25.208 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:25.208 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:25.208 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:25.208 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:25.467 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:25.726 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:25.726 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:25.726 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:25.726 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:25.726 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:25.726 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:25.726 [217/268] Linking static target drivers/librte_bus_pci.a 00:04:25.726 [218/268] Linking static target drivers/librte_bus_vdev.a 00:04:25.726 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.726 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:25.726 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:25.984 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.984 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:25.984 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:25.984 
[225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:25.984 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:26.242 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.618 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:27.877 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.877 [230/268] Linking target lib/librte_eal.so.24.1 00:04:28.136 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:28.136 [232/268] Linking target lib/librte_pci.so.24.1 00:04:28.136 [233/268] Linking target lib/librte_dmadev.so.24.1 00:04:28.136 [234/268] Linking target lib/librte_meter.so.24.1 00:04:28.136 [235/268] Linking target lib/librte_ring.so.24.1 00:04:28.136 [236/268] Linking target lib/librte_timer.so.24.1 00:04:28.397 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:28.397 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:28.397 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:28.397 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:28.397 [241/268] Linking target lib/librte_rcu.so.24.1 00:04:28.397 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:28.397 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:28.397 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:28.397 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:28.657 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:28.657 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:28.657 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:04:28.657 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:28.917 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:28.917 [251/268] Linking target lib/librte_reorder.so.24.1 00:04:28.917 [252/268] Linking target lib/librte_net.so.24.1 00:04:28.917 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:28.917 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:28.917 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:29.177 [256/268] Linking target lib/librte_hash.so.24.1 00:04:29.177 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:29.177 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:29.177 [259/268] Linking target lib/librte_security.so.24.1 00:04:29.177 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:30.558 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.558 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:30.818 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:30.818 [264/268] Linking target lib/librte_power.so.24.1 00:04:32.197 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:32.456 [266/268] Linking static target lib/librte_vhost.a 00:04:35.003 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.003 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:35.003 INFO: autodetecting backend as ninja 00:04:35.003 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:53.114 CC lib/ut/ut.o 00:04:53.114 CC lib/ut_mock/mock.o 00:04:53.114 CC lib/log/log.o 00:04:53.114 CC lib/log/log_flags.o 00:04:53.114 CC lib/log/log_deprecated.o 00:04:53.114 LIB 
libspdk_ut.a 00:04:53.114 SO libspdk_ut.so.2.0 00:04:53.114 LIB libspdk_ut_mock.a 00:04:53.114 LIB libspdk_log.a 00:04:53.114 SO libspdk_ut_mock.so.6.0 00:04:53.114 SYMLINK libspdk_ut.so 00:04:53.114 SO libspdk_log.so.7.1 00:04:53.114 SYMLINK libspdk_ut_mock.so 00:04:53.114 SYMLINK libspdk_log.so 00:04:53.373 CC lib/util/base64.o 00:04:53.373 CC lib/util/bit_array.o 00:04:53.373 CC lib/util/cpuset.o 00:04:53.373 CC lib/util/crc16.o 00:04:53.373 CC lib/util/crc32c.o 00:04:53.373 CC lib/util/crc32.o 00:04:53.373 CC lib/dma/dma.o 00:04:53.373 CC lib/ioat/ioat.o 00:04:53.373 CXX lib/trace_parser/trace.o 00:04:53.631 CC lib/util/crc32_ieee.o 00:04:53.631 CC lib/vfio_user/host/vfio_user_pci.o 00:04:53.631 CC lib/util/crc64.o 00:04:53.631 CC lib/vfio_user/host/vfio_user.o 00:04:53.631 CC lib/util/dif.o 00:04:53.631 LIB libspdk_dma.a 00:04:53.631 CC lib/util/fd.o 00:04:53.631 SO libspdk_dma.so.5.0 00:04:53.631 CC lib/util/fd_group.o 00:04:53.631 CC lib/util/file.o 00:04:53.631 CC lib/util/hexlify.o 00:04:53.631 SYMLINK libspdk_dma.so 00:04:53.631 CC lib/util/iov.o 00:04:53.889 CC lib/util/math.o 00:04:53.889 CC lib/util/net.o 00:04:53.889 LIB libspdk_ioat.a 00:04:53.889 SO libspdk_ioat.so.7.0 00:04:53.889 CC lib/util/pipe.o 00:04:53.889 CC lib/util/strerror_tls.o 00:04:53.889 LIB libspdk_vfio_user.a 00:04:53.889 SYMLINK libspdk_ioat.so 00:04:53.889 CC lib/util/string.o 00:04:53.889 SO libspdk_vfio_user.so.5.0 00:04:53.889 CC lib/util/uuid.o 00:04:53.889 CC lib/util/xor.o 00:04:53.889 CC lib/util/zipf.o 00:04:53.889 SYMLINK libspdk_vfio_user.so 00:04:53.889 CC lib/util/md5.o 00:04:54.457 LIB libspdk_util.a 00:04:54.457 SO libspdk_util.so.10.1 00:04:54.457 LIB libspdk_trace_parser.a 00:04:54.715 SO libspdk_trace_parser.so.6.0 00:04:54.715 SYMLINK libspdk_util.so 00:04:54.715 SYMLINK libspdk_trace_parser.so 00:04:54.974 CC lib/rdma_utils/rdma_utils.o 00:04:54.974 CC lib/env_dpdk/memory.o 00:04:54.974 CC lib/env_dpdk/pci.o 00:04:54.974 CC lib/env_dpdk/threads.o 00:04:54.974 CC 
lib/env_dpdk/env.o 00:04:54.974 CC lib/idxd/idxd.o 00:04:54.974 CC lib/env_dpdk/init.o 00:04:54.974 CC lib/vmd/vmd.o 00:04:54.974 CC lib/conf/conf.o 00:04:54.974 CC lib/json/json_parse.o 00:04:54.974 CC lib/env_dpdk/pci_ioat.o 00:04:55.233 LIB libspdk_conf.a 00:04:55.233 CC lib/json/json_util.o 00:04:55.233 SO libspdk_conf.so.6.0 00:04:55.233 CC lib/json/json_write.o 00:04:55.233 LIB libspdk_rdma_utils.a 00:04:55.233 SYMLINK libspdk_conf.so 00:04:55.233 CC lib/env_dpdk/pci_virtio.o 00:04:55.233 SO libspdk_rdma_utils.so.1.0 00:04:55.233 CC lib/env_dpdk/pci_vmd.o 00:04:55.233 SYMLINK libspdk_rdma_utils.so 00:04:55.233 CC lib/env_dpdk/pci_idxd.o 00:04:55.506 CC lib/env_dpdk/pci_event.o 00:04:55.506 CC lib/vmd/led.o 00:04:55.506 CC lib/idxd/idxd_user.o 00:04:55.506 CC lib/env_dpdk/sigbus_handler.o 00:04:55.506 LIB libspdk_json.a 00:04:55.506 CC lib/env_dpdk/pci_dpdk.o 00:04:55.506 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:55.506 SO libspdk_json.so.6.0 00:04:55.506 CC lib/rdma_provider/common.o 00:04:55.506 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:55.506 SYMLINK libspdk_json.so 00:04:55.506 CC lib/idxd/idxd_kernel.o 00:04:55.506 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:55.765 LIB libspdk_vmd.a 00:04:55.765 SO libspdk_vmd.so.6.0 00:04:55.765 CC lib/jsonrpc/jsonrpc_server.o 00:04:55.765 CC lib/jsonrpc/jsonrpc_client.o 00:04:55.765 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:55.765 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:55.765 SYMLINK libspdk_vmd.so 00:04:55.765 LIB libspdk_idxd.a 00:04:55.765 LIB libspdk_rdma_provider.a 00:04:55.765 SO libspdk_idxd.so.12.1 00:04:55.765 SO libspdk_rdma_provider.so.7.0 00:04:56.025 SYMLINK libspdk_idxd.so 00:04:56.025 SYMLINK libspdk_rdma_provider.so 00:04:56.025 LIB libspdk_jsonrpc.a 00:04:56.025 SO libspdk_jsonrpc.so.6.0 00:04:56.284 SYMLINK libspdk_jsonrpc.so 00:04:56.544 CC lib/rpc/rpc.o 00:04:56.850 LIB libspdk_env_dpdk.a 00:04:56.850 SO libspdk_env_dpdk.so.15.1 00:04:56.850 LIB libspdk_rpc.a 00:04:56.850 SO libspdk_rpc.so.6.0 
00:04:57.109 SYMLINK libspdk_rpc.so 00:04:57.109 SYMLINK libspdk_env_dpdk.so 00:04:57.369 CC lib/trace/trace.o 00:04:57.369 CC lib/trace/trace_rpc.o 00:04:57.369 CC lib/trace/trace_flags.o 00:04:57.369 CC lib/notify/notify.o 00:04:57.369 CC lib/notify/notify_rpc.o 00:04:57.369 CC lib/keyring/keyring.o 00:04:57.369 CC lib/keyring/keyring_rpc.o 00:04:57.630 LIB libspdk_notify.a 00:04:57.630 SO libspdk_notify.so.6.0 00:04:57.630 LIB libspdk_keyring.a 00:04:57.630 LIB libspdk_trace.a 00:04:57.630 SO libspdk_keyring.so.2.0 00:04:57.630 SYMLINK libspdk_notify.so 00:04:57.630 SO libspdk_trace.so.11.0 00:04:57.630 SYMLINK libspdk_keyring.so 00:04:57.889 SYMLINK libspdk_trace.so 00:04:58.148 CC lib/sock/sock.o 00:04:58.148 CC lib/sock/sock_rpc.o 00:04:58.148 CC lib/thread/thread.o 00:04:58.148 CC lib/thread/iobuf.o 00:04:58.715 LIB libspdk_sock.a 00:04:58.715 SO libspdk_sock.so.10.0 00:04:58.715 SYMLINK libspdk_sock.so 00:04:59.283 CC lib/nvme/nvme_ctrlr.o 00:04:59.283 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:59.283 CC lib/nvme/nvme_ns_cmd.o 00:04:59.283 CC lib/nvme/nvme_qpair.o 00:04:59.283 CC lib/nvme/nvme_ns.o 00:04:59.283 CC lib/nvme/nvme_fabric.o 00:04:59.283 CC lib/nvme/nvme_pcie_common.o 00:04:59.283 CC lib/nvme/nvme_pcie.o 00:04:59.283 CC lib/nvme/nvme.o 00:04:59.851 CC lib/nvme/nvme_quirks.o 00:04:59.851 CC lib/nvme/nvme_transport.o 00:04:59.851 CC lib/nvme/nvme_discovery.o 00:05:00.110 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:00.110 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:00.110 CC lib/nvme/nvme_tcp.o 00:05:00.110 LIB libspdk_thread.a 00:05:00.110 SO libspdk_thread.so.11.0 00:05:00.110 CC lib/nvme/nvme_opal.o 00:05:00.110 SYMLINK libspdk_thread.so 00:05:00.110 CC lib/nvme/nvme_io_msg.o 00:05:00.369 CC lib/nvme/nvme_poll_group.o 00:05:00.369 CC lib/nvme/nvme_zns.o 00:05:00.628 CC lib/nvme/nvme_stubs.o 00:05:00.628 CC lib/nvme/nvme_auth.o 00:05:00.628 CC lib/nvme/nvme_cuse.o 00:05:00.628 CC lib/accel/accel.o 00:05:01.202 CC lib/blob/blobstore.o 00:05:01.202 CC 
lib/init/json_config.o 00:05:01.202 CC lib/init/subsystem.o 00:05:01.202 CC lib/init/subsystem_rpc.o 00:05:01.202 CC lib/blob/request.o 00:05:01.476 CC lib/accel/accel_rpc.o 00:05:01.476 CC lib/virtio/virtio.o 00:05:01.476 CC lib/init/rpc.o 00:05:01.734 CC lib/virtio/virtio_vhost_user.o 00:05:01.734 CC lib/virtio/virtio_vfio_user.o 00:05:01.734 CC lib/virtio/virtio_pci.o 00:05:01.734 LIB libspdk_init.a 00:05:01.734 CC lib/nvme/nvme_rdma.o 00:05:01.734 SO libspdk_init.so.6.0 00:05:01.992 CC lib/accel/accel_sw.o 00:05:01.992 SYMLINK libspdk_init.so 00:05:01.992 CC lib/blob/zeroes.o 00:05:01.992 CC lib/blob/blob_bs_dev.o 00:05:01.992 CC lib/fsdev/fsdev.o 00:05:01.992 CC lib/event/app.o 00:05:02.251 CC lib/fsdev/fsdev_rpc.o 00:05:02.251 CC lib/fsdev/fsdev_io.o 00:05:02.251 CC lib/event/reactor.o 00:05:02.251 LIB libspdk_virtio.a 00:05:02.251 SO libspdk_virtio.so.7.0 00:05:02.251 CC lib/event/log_rpc.o 00:05:02.251 LIB libspdk_accel.a 00:05:02.251 CC lib/event/app_rpc.o 00:05:02.251 SO libspdk_accel.so.16.0 00:05:02.251 SYMLINK libspdk_virtio.so 00:05:02.510 CC lib/event/scheduler_static.o 00:05:02.510 SYMLINK libspdk_accel.so 00:05:02.768 CC lib/bdev/bdev_zone.o 00:05:02.768 CC lib/bdev/bdev_rpc.o 00:05:02.768 CC lib/bdev/bdev.o 00:05:02.768 CC lib/bdev/part.o 00:05:02.768 CC lib/bdev/scsi_nvme.o 00:05:02.768 LIB libspdk_event.a 00:05:02.768 SO libspdk_event.so.14.0 00:05:03.027 LIB libspdk_fsdev.a 00:05:03.027 SYMLINK libspdk_event.so 00:05:03.027 SO libspdk_fsdev.so.2.0 00:05:03.027 SYMLINK libspdk_fsdev.so 00:05:03.594 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:03.594 LIB libspdk_nvme.a 00:05:03.852 SO libspdk_nvme.so.15.0 00:05:04.112 SYMLINK libspdk_nvme.so 00:05:04.372 LIB libspdk_fuse_dispatcher.a 00:05:04.372 SO libspdk_fuse_dispatcher.so.1.0 00:05:04.372 SYMLINK libspdk_fuse_dispatcher.so 00:05:05.396 LIB libspdk_blob.a 00:05:05.655 SO libspdk_blob.so.12.0 00:05:05.655 SYMLINK libspdk_blob.so 00:05:05.913 CC lib/blobfs/blobfs.o 00:05:05.913 CC 
lib/blobfs/tree.o 00:05:05.913 CC lib/lvol/lvol.o 00:05:06.173 LIB libspdk_bdev.a 00:05:06.173 SO libspdk_bdev.so.17.0 00:05:06.173 SYMLINK libspdk_bdev.so 00:05:06.432 CC lib/ftl/ftl_core.o 00:05:06.432 CC lib/ftl/ftl_debug.o 00:05:06.432 CC lib/nbd/nbd.o 00:05:06.432 CC lib/ftl/ftl_init.o 00:05:06.432 CC lib/ftl/ftl_layout.o 00:05:06.432 CC lib/nvmf/ctrlr.o 00:05:06.432 CC lib/scsi/dev.o 00:05:06.432 CC lib/ublk/ublk.o 00:05:06.690 CC lib/ublk/ublk_rpc.o 00:05:06.690 CC lib/nvmf/ctrlr_discovery.o 00:05:06.948 CC lib/scsi/lun.o 00:05:06.948 CC lib/scsi/port.o 00:05:06.948 CC lib/scsi/scsi.o 00:05:06.948 CC lib/ftl/ftl_io.o 00:05:06.948 LIB libspdk_blobfs.a 00:05:06.948 CC lib/nbd/nbd_rpc.o 00:05:06.948 SO libspdk_blobfs.so.11.0 00:05:07.205 CC lib/nvmf/ctrlr_bdev.o 00:05:07.205 SYMLINK libspdk_blobfs.so 00:05:07.205 CC lib/ftl/ftl_sb.o 00:05:07.205 CC lib/scsi/scsi_bdev.o 00:05:07.205 LIB libspdk_lvol.a 00:05:07.205 CC lib/scsi/scsi_pr.o 00:05:07.205 LIB libspdk_nbd.a 00:05:07.205 SO libspdk_lvol.so.11.0 00:05:07.205 SO libspdk_nbd.so.7.0 00:05:07.205 SYMLINK libspdk_lvol.so 00:05:07.205 SYMLINK libspdk_nbd.so 00:05:07.205 CC lib/ftl/ftl_l2p.o 00:05:07.205 CC lib/scsi/scsi_rpc.o 00:05:07.206 CC lib/nvmf/subsystem.o 00:05:07.206 CC lib/ftl/ftl_l2p_flat.o 00:05:07.464 LIB libspdk_ublk.a 00:05:07.464 CC lib/scsi/task.o 00:05:07.464 SO libspdk_ublk.so.3.0 00:05:07.464 CC lib/nvmf/nvmf.o 00:05:07.464 SYMLINK libspdk_ublk.so 00:05:07.464 CC lib/ftl/ftl_nv_cache.o 00:05:07.464 CC lib/ftl/ftl_band.o 00:05:07.464 CC lib/ftl/ftl_band_ops.o 00:05:07.464 CC lib/nvmf/nvmf_rpc.o 00:05:07.722 CC lib/ftl/ftl_writer.o 00:05:07.722 LIB libspdk_scsi.a 00:05:07.722 SO libspdk_scsi.so.9.0 00:05:07.980 SYMLINK libspdk_scsi.so 00:05:07.980 CC lib/nvmf/transport.o 00:05:07.980 CC lib/nvmf/tcp.o 00:05:07.980 CC lib/ftl/ftl_rq.o 00:05:07.980 CC lib/ftl/ftl_reloc.o 00:05:07.980 CC lib/ftl/ftl_l2p_cache.o 00:05:08.238 CC lib/iscsi/conn.o 00:05:08.533 CC lib/iscsi/init_grp.o 00:05:08.533 CC 
lib/nvmf/stubs.o 00:05:08.533 CC lib/nvmf/mdns_server.o 00:05:08.825 CC lib/nvmf/rdma.o 00:05:08.825 CC lib/nvmf/auth.o 00:05:08.825 CC lib/ftl/ftl_p2l.o 00:05:08.825 CC lib/vhost/vhost.o 00:05:09.084 CC lib/vhost/vhost_rpc.o 00:05:09.084 CC lib/iscsi/iscsi.o 00:05:09.084 CC lib/ftl/ftl_p2l_log.o 00:05:09.084 CC lib/vhost/vhost_scsi.o 00:05:09.084 CC lib/iscsi/param.o 00:05:09.342 CC lib/iscsi/portal_grp.o 00:05:09.601 CC lib/ftl/mngt/ftl_mngt.o 00:05:09.601 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:09.601 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:09.858 CC lib/vhost/vhost_blk.o 00:05:09.858 CC lib/iscsi/tgt_node.o 00:05:09.858 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:09.858 CC lib/vhost/rte_vhost_user.o 00:05:09.858 CC lib/iscsi/iscsi_subsystem.o 00:05:10.117 CC lib/iscsi/iscsi_rpc.o 00:05:10.117 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:10.117 CC lib/iscsi/task.o 00:05:10.117 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:10.375 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:10.375 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:10.375 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:10.375 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:10.633 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:10.633 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:10.633 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:10.633 CC lib/ftl/utils/ftl_conf.o 00:05:10.633 CC lib/ftl/utils/ftl_md.o 00:05:10.633 CC lib/ftl/utils/ftl_mempool.o 00:05:10.633 CC lib/ftl/utils/ftl_bitmap.o 00:05:10.891 CC lib/ftl/utils/ftl_property.o 00:05:10.891 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:10.891 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:10.891 LIB libspdk_iscsi.a 00:05:10.891 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:10.891 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:10.891 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:10.891 SO libspdk_iscsi.so.8.0 00:05:11.150 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:11.150 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:11.150 LIB libspdk_vhost.a 00:05:11.150 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:11.150 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:05:11.150 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:11.150 SYMLINK libspdk_iscsi.so 00:05:11.150 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:11.150 SO libspdk_vhost.so.8.0 00:05:11.150 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:11.150 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:11.150 CC lib/ftl/base/ftl_base_dev.o 00:05:11.150 CC lib/ftl/base/ftl_base_bdev.o 00:05:11.410 SYMLINK libspdk_vhost.so 00:05:11.410 CC lib/ftl/ftl_trace.o 00:05:11.671 LIB libspdk_ftl.a 00:05:11.671 LIB libspdk_nvmf.a 00:05:11.671 SO libspdk_nvmf.so.20.0 00:05:11.930 SO libspdk_ftl.so.9.0 00:05:11.930 SYMLINK libspdk_nvmf.so 00:05:12.190 SYMLINK libspdk_ftl.so 00:05:12.450 CC module/env_dpdk/env_dpdk_rpc.o 00:05:12.709 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:12.709 CC module/keyring/file/keyring.o 00:05:12.709 CC module/scheduler/gscheduler/gscheduler.o 00:05:12.709 CC module/fsdev/aio/fsdev_aio.o 00:05:12.709 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:12.709 CC module/sock/posix/posix.o 00:05:12.709 CC module/accel/error/accel_error.o 00:05:12.709 CC module/blob/bdev/blob_bdev.o 00:05:12.709 CC module/keyring/linux/keyring.o 00:05:12.709 LIB libspdk_env_dpdk_rpc.a 00:05:12.709 SO libspdk_env_dpdk_rpc.so.6.0 00:05:12.710 CC module/keyring/file/keyring_rpc.o 00:05:12.710 SYMLINK libspdk_env_dpdk_rpc.so 00:05:12.710 LIB libspdk_scheduler_gscheduler.a 00:05:12.710 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:12.710 LIB libspdk_scheduler_dpdk_governor.a 00:05:12.710 CC module/keyring/linux/keyring_rpc.o 00:05:12.710 SO libspdk_scheduler_gscheduler.so.4.0 00:05:12.710 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:12.710 LIB libspdk_scheduler_dynamic.a 00:05:12.968 CC module/accel/error/accel_error_rpc.o 00:05:12.968 SO libspdk_scheduler_dynamic.so.4.0 00:05:12.968 SYMLINK libspdk_scheduler_gscheduler.so 00:05:12.968 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:12.968 CC module/fsdev/aio/linux_aio_mgr.o 00:05:12.968 SYMLINK 
libspdk_scheduler_dynamic.so 00:05:12.968 LIB libspdk_keyring_file.a 00:05:12.968 LIB libspdk_keyring_linux.a 00:05:12.968 SO libspdk_keyring_file.so.2.0 00:05:12.968 LIB libspdk_blob_bdev.a 00:05:12.968 SO libspdk_keyring_linux.so.1.0 00:05:12.968 SO libspdk_blob_bdev.so.12.0 00:05:12.968 LIB libspdk_accel_error.a 00:05:12.968 SYMLINK libspdk_keyring_file.so 00:05:12.968 SYMLINK libspdk_keyring_linux.so 00:05:12.968 SYMLINK libspdk_blob_bdev.so 00:05:12.968 CC module/accel/ioat/accel_ioat.o 00:05:12.968 CC module/accel/ioat/accel_ioat_rpc.o 00:05:12.968 SO libspdk_accel_error.so.2.0 00:05:12.968 CC module/accel/dsa/accel_dsa.o 00:05:12.968 CC module/accel/iaa/accel_iaa.o 00:05:13.226 CC module/accel/iaa/accel_iaa_rpc.o 00:05:13.226 SYMLINK libspdk_accel_error.so 00:05:13.226 CC module/accel/dsa/accel_dsa_rpc.o 00:05:13.226 LIB libspdk_accel_ioat.a 00:05:13.226 SO libspdk_accel_ioat.so.6.0 00:05:13.226 CC module/blobfs/bdev/blobfs_bdev.o 00:05:13.485 LIB libspdk_accel_iaa.a 00:05:13.485 CC module/bdev/error/vbdev_error.o 00:05:13.485 CC module/bdev/delay/vbdev_delay.o 00:05:13.485 SO libspdk_accel_iaa.so.3.0 00:05:13.485 SYMLINK libspdk_accel_ioat.so 00:05:13.485 LIB libspdk_accel_dsa.a 00:05:13.485 LIB libspdk_fsdev_aio.a 00:05:13.485 SYMLINK libspdk_accel_iaa.so 00:05:13.485 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:13.485 SO libspdk_accel_dsa.so.5.0 00:05:13.485 CC module/bdev/gpt/gpt.o 00:05:13.485 CC module/bdev/lvol/vbdev_lvol.o 00:05:13.485 SO libspdk_fsdev_aio.so.1.0 00:05:13.485 SYMLINK libspdk_accel_dsa.so 00:05:13.485 LIB libspdk_sock_posix.a 00:05:13.485 CC module/bdev/malloc/bdev_malloc.o 00:05:13.485 SYMLINK libspdk_fsdev_aio.so 00:05:13.485 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:13.485 SO libspdk_sock_posix.so.6.0 00:05:13.744 LIB libspdk_blobfs_bdev.a 00:05:13.744 CC module/bdev/error/vbdev_error_rpc.o 00:05:13.744 CC module/bdev/gpt/vbdev_gpt.o 00:05:13.744 SO libspdk_blobfs_bdev.so.6.0 00:05:13.744 SYMLINK libspdk_sock_posix.so 
00:05:13.744 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:13.744 CC module/bdev/null/bdev_null.o 00:05:13.744 CC module/bdev/nvme/bdev_nvme.o 00:05:13.744 SYMLINK libspdk_blobfs_bdev.so 00:05:13.744 CC module/bdev/null/bdev_null_rpc.o 00:05:13.744 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:13.744 LIB libspdk_bdev_error.a 00:05:14.002 SO libspdk_bdev_error.so.6.0 00:05:14.002 LIB libspdk_bdev_delay.a 00:05:14.002 SO libspdk_bdev_delay.so.6.0 00:05:14.002 SYMLINK libspdk_bdev_error.so 00:05:14.002 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:14.002 CC module/bdev/nvme/nvme_rpc.o 00:05:14.002 SYMLINK libspdk_bdev_delay.so 00:05:14.002 LIB libspdk_bdev_gpt.a 00:05:14.002 LIB libspdk_bdev_null.a 00:05:14.002 LIB libspdk_bdev_malloc.a 00:05:14.002 SO libspdk_bdev_gpt.so.6.0 00:05:14.002 SO libspdk_bdev_malloc.so.6.0 00:05:14.002 SO libspdk_bdev_null.so.6.0 00:05:14.002 SYMLINK libspdk_bdev_gpt.so 00:05:14.002 CC module/bdev/passthru/vbdev_passthru.o 00:05:14.002 LIB libspdk_bdev_lvol.a 00:05:14.002 SYMLINK libspdk_bdev_null.so 00:05:14.002 SYMLINK libspdk_bdev_malloc.so 00:05:14.002 CC module/bdev/nvme/bdev_mdns_client.o 00:05:14.300 SO libspdk_bdev_lvol.so.6.0 00:05:14.300 CC module/bdev/raid/bdev_raid.o 00:05:14.300 CC module/bdev/split/vbdev_split.o 00:05:14.300 CC module/bdev/split/vbdev_split_rpc.o 00:05:14.300 SYMLINK libspdk_bdev_lvol.so 00:05:14.300 CC module/bdev/raid/bdev_raid_rpc.o 00:05:14.300 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:14.300 CC module/bdev/nvme/vbdev_opal.o 00:05:14.300 CC module/bdev/aio/bdev_aio.o 00:05:14.557 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:14.557 LIB libspdk_bdev_split.a 00:05:14.557 SO libspdk_bdev_split.so.6.0 00:05:14.557 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:14.557 SYMLINK libspdk_bdev_split.so 00:05:14.557 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:14.557 CC module/bdev/raid/bdev_raid_sb.o 00:05:14.557 CC module/bdev/raid/raid0.o 00:05:14.557 LIB libspdk_bdev_passthru.a 00:05:14.815 CC 
module/bdev/ftl/bdev_ftl.o 00:05:14.815 SO libspdk_bdev_passthru.so.6.0 00:05:14.815 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:14.815 CC module/bdev/aio/bdev_aio_rpc.o 00:05:14.815 SYMLINK libspdk_bdev_passthru.so 00:05:14.815 CC module/bdev/raid/raid1.o 00:05:14.815 CC module/bdev/raid/concat.o 00:05:14.815 LIB libspdk_bdev_zone_block.a 00:05:14.815 CC module/bdev/iscsi/bdev_iscsi.o 00:05:14.815 SO libspdk_bdev_zone_block.so.6.0 00:05:15.074 LIB libspdk_bdev_aio.a 00:05:15.074 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:15.074 SO libspdk_bdev_aio.so.6.0 00:05:15.074 SYMLINK libspdk_bdev_zone_block.so 00:05:15.074 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:15.074 CC module/bdev/raid/raid5f.o 00:05:15.074 SYMLINK libspdk_bdev_aio.so 00:05:15.333 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:15.333 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:15.333 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:15.333 LIB libspdk_bdev_ftl.a 00:05:15.333 SO libspdk_bdev_ftl.so.6.0 00:05:15.333 LIB libspdk_bdev_iscsi.a 00:05:15.333 SYMLINK libspdk_bdev_ftl.so 00:05:15.333 SO libspdk_bdev_iscsi.so.6.0 00:05:15.593 SYMLINK libspdk_bdev_iscsi.so 00:05:15.593 LIB libspdk_bdev_raid.a 00:05:15.852 SO libspdk_bdev_raid.so.6.0 00:05:15.852 SYMLINK libspdk_bdev_raid.so 00:05:15.852 LIB libspdk_bdev_virtio.a 00:05:15.852 SO libspdk_bdev_virtio.so.6.0 00:05:16.112 SYMLINK libspdk_bdev_virtio.so 00:05:17.050 LIB libspdk_bdev_nvme.a 00:05:17.309 SO libspdk_bdev_nvme.so.7.1 00:05:17.309 SYMLINK libspdk_bdev_nvme.so 00:05:17.876 CC module/event/subsystems/scheduler/scheduler.o 00:05:17.876 CC module/event/subsystems/fsdev/fsdev.o 00:05:17.876 CC module/event/subsystems/iobuf/iobuf.o 00:05:17.876 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:17.876 CC module/event/subsystems/sock/sock.o 00:05:17.876 CC module/event/subsystems/keyring/keyring.o 00:05:17.876 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:17.876 CC module/event/subsystems/vmd/vmd.o 00:05:17.876 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:05:18.177 LIB libspdk_event_fsdev.a 00:05:18.177 LIB libspdk_event_vhost_blk.a 00:05:18.177 LIB libspdk_event_keyring.a 00:05:18.177 LIB libspdk_event_vmd.a 00:05:18.177 LIB libspdk_event_sock.a 00:05:18.177 SO libspdk_event_fsdev.so.1.0 00:05:18.177 SO libspdk_event_vhost_blk.so.3.0 00:05:18.177 SO libspdk_event_keyring.so.1.0 00:05:18.177 SO libspdk_event_sock.so.5.0 00:05:18.177 SO libspdk_event_vmd.so.6.0 00:05:18.177 LIB libspdk_event_scheduler.a 00:05:18.177 SO libspdk_event_scheduler.so.4.0 00:05:18.177 SYMLINK libspdk_event_fsdev.so 00:05:18.177 SYMLINK libspdk_event_vhost_blk.so 00:05:18.177 LIB libspdk_event_iobuf.a 00:05:18.177 SYMLINK libspdk_event_keyring.so 00:05:18.177 SYMLINK libspdk_event_sock.so 00:05:18.177 SYMLINK libspdk_event_vmd.so 00:05:18.177 SO libspdk_event_iobuf.so.3.0 00:05:18.177 SYMLINK libspdk_event_scheduler.so 00:05:18.177 SYMLINK libspdk_event_iobuf.so 00:05:18.744 CC module/event/subsystems/accel/accel.o 00:05:18.744 LIB libspdk_event_accel.a 00:05:18.744 SO libspdk_event_accel.so.6.0 00:05:19.002 SYMLINK libspdk_event_accel.so 00:05:19.261 CC module/event/subsystems/bdev/bdev.o 00:05:19.520 LIB libspdk_event_bdev.a 00:05:19.520 SO libspdk_event_bdev.so.6.0 00:05:19.520 SYMLINK libspdk_event_bdev.so 00:05:19.779 CC module/event/subsystems/ublk/ublk.o 00:05:19.779 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:19.779 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:19.779 CC module/event/subsystems/nbd/nbd.o 00:05:19.779 CC module/event/subsystems/scsi/scsi.o 00:05:20.037 LIB libspdk_event_ublk.a 00:05:20.037 SO libspdk_event_ublk.so.3.0 00:05:20.037 LIB libspdk_event_nbd.a 00:05:20.037 LIB libspdk_event_scsi.a 00:05:20.037 SO libspdk_event_nbd.so.6.0 00:05:20.037 SYMLINK libspdk_event_ublk.so 00:05:20.037 SO libspdk_event_scsi.so.6.0 00:05:20.037 SYMLINK libspdk_event_nbd.so 00:05:20.037 LIB libspdk_event_nvmf.a 00:05:20.297 SYMLINK libspdk_event_scsi.so 00:05:20.297 SO 
libspdk_event_nvmf.so.6.0 00:05:20.297 SYMLINK libspdk_event_nvmf.so 00:05:20.555 CC module/event/subsystems/iscsi/iscsi.o 00:05:20.555 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:20.814 LIB libspdk_event_vhost_scsi.a 00:05:20.814 LIB libspdk_event_iscsi.a 00:05:20.814 SO libspdk_event_vhost_scsi.so.3.0 00:05:20.814 SO libspdk_event_iscsi.so.6.0 00:05:20.814 SYMLINK libspdk_event_vhost_scsi.so 00:05:20.814 SYMLINK libspdk_event_iscsi.so 00:05:21.073 SO libspdk.so.6.0 00:05:21.073 SYMLINK libspdk.so 00:05:21.333 TEST_HEADER include/spdk/accel.h 00:05:21.333 TEST_HEADER include/spdk/accel_module.h 00:05:21.333 TEST_HEADER include/spdk/assert.h 00:05:21.333 TEST_HEADER include/spdk/barrier.h 00:05:21.333 TEST_HEADER include/spdk/base64.h 00:05:21.333 TEST_HEADER include/spdk/bdev.h 00:05:21.333 TEST_HEADER include/spdk/bdev_module.h 00:05:21.333 TEST_HEADER include/spdk/bdev_zone.h 00:05:21.333 TEST_HEADER include/spdk/bit_array.h 00:05:21.333 CC app/trace_record/trace_record.o 00:05:21.333 TEST_HEADER include/spdk/bit_pool.h 00:05:21.333 CXX app/trace/trace.o 00:05:21.333 CC test/rpc_client/rpc_client_test.o 00:05:21.333 TEST_HEADER include/spdk/blob_bdev.h 00:05:21.333 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:21.333 TEST_HEADER include/spdk/blobfs.h 00:05:21.333 TEST_HEADER include/spdk/blob.h 00:05:21.333 TEST_HEADER include/spdk/conf.h 00:05:21.333 TEST_HEADER include/spdk/config.h 00:05:21.333 TEST_HEADER include/spdk/cpuset.h 00:05:21.333 TEST_HEADER include/spdk/crc16.h 00:05:21.333 TEST_HEADER include/spdk/crc32.h 00:05:21.333 TEST_HEADER include/spdk/crc64.h 00:05:21.333 TEST_HEADER include/spdk/dif.h 00:05:21.333 TEST_HEADER include/spdk/dma.h 00:05:21.333 TEST_HEADER include/spdk/endian.h 00:05:21.333 TEST_HEADER include/spdk/env_dpdk.h 00:05:21.333 TEST_HEADER include/spdk/env.h 00:05:21.333 TEST_HEADER include/spdk/event.h 00:05:21.333 TEST_HEADER include/spdk/fd_group.h 00:05:21.333 TEST_HEADER include/spdk/fd.h 00:05:21.333 TEST_HEADER 
include/spdk/file.h 00:05:21.333 TEST_HEADER include/spdk/fsdev.h 00:05:21.333 TEST_HEADER include/spdk/fsdev_module.h 00:05:21.333 TEST_HEADER include/spdk/ftl.h 00:05:21.592 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:21.592 TEST_HEADER include/spdk/gpt_spec.h 00:05:21.592 TEST_HEADER include/spdk/hexlify.h 00:05:21.592 TEST_HEADER include/spdk/histogram_data.h 00:05:21.592 TEST_HEADER include/spdk/idxd.h 00:05:21.592 CC test/thread/poller_perf/poller_perf.o 00:05:21.592 TEST_HEADER include/spdk/idxd_spec.h 00:05:21.592 TEST_HEADER include/spdk/init.h 00:05:21.592 TEST_HEADER include/spdk/ioat.h 00:05:21.592 TEST_HEADER include/spdk/ioat_spec.h 00:05:21.592 TEST_HEADER include/spdk/iscsi_spec.h 00:05:21.592 CC examples/util/zipf/zipf.o 00:05:21.592 TEST_HEADER include/spdk/json.h 00:05:21.592 TEST_HEADER include/spdk/jsonrpc.h 00:05:21.592 TEST_HEADER include/spdk/keyring.h 00:05:21.592 TEST_HEADER include/spdk/keyring_module.h 00:05:21.592 TEST_HEADER include/spdk/likely.h 00:05:21.592 TEST_HEADER include/spdk/log.h 00:05:21.592 TEST_HEADER include/spdk/lvol.h 00:05:21.592 TEST_HEADER include/spdk/md5.h 00:05:21.592 TEST_HEADER include/spdk/memory.h 00:05:21.592 TEST_HEADER include/spdk/mmio.h 00:05:21.592 TEST_HEADER include/spdk/nbd.h 00:05:21.592 TEST_HEADER include/spdk/net.h 00:05:21.592 CC examples/ioat/perf/perf.o 00:05:21.592 TEST_HEADER include/spdk/notify.h 00:05:21.593 TEST_HEADER include/spdk/nvme.h 00:05:21.593 CC test/dma/test_dma/test_dma.o 00:05:21.593 TEST_HEADER include/spdk/nvme_intel.h 00:05:21.593 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:21.593 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:21.593 TEST_HEADER include/spdk/nvme_spec.h 00:05:21.593 TEST_HEADER include/spdk/nvme_zns.h 00:05:21.593 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:21.593 CC test/app/bdev_svc/bdev_svc.o 00:05:21.593 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:21.593 TEST_HEADER include/spdk/nvmf.h 00:05:21.593 TEST_HEADER include/spdk/nvmf_spec.h 00:05:21.593 
TEST_HEADER include/spdk/nvmf_transport.h 00:05:21.593 TEST_HEADER include/spdk/opal.h 00:05:21.593 TEST_HEADER include/spdk/opal_spec.h 00:05:21.593 TEST_HEADER include/spdk/pci_ids.h 00:05:21.593 TEST_HEADER include/spdk/pipe.h 00:05:21.593 TEST_HEADER include/spdk/queue.h 00:05:21.593 TEST_HEADER include/spdk/reduce.h 00:05:21.593 TEST_HEADER include/spdk/rpc.h 00:05:21.593 TEST_HEADER include/spdk/scheduler.h 00:05:21.593 TEST_HEADER include/spdk/scsi.h 00:05:21.593 TEST_HEADER include/spdk/scsi_spec.h 00:05:21.593 TEST_HEADER include/spdk/sock.h 00:05:21.593 TEST_HEADER include/spdk/stdinc.h 00:05:21.593 TEST_HEADER include/spdk/string.h 00:05:21.593 TEST_HEADER include/spdk/thread.h 00:05:21.593 TEST_HEADER include/spdk/trace.h 00:05:21.593 TEST_HEADER include/spdk/trace_parser.h 00:05:21.593 TEST_HEADER include/spdk/tree.h 00:05:21.593 TEST_HEADER include/spdk/ublk.h 00:05:21.593 TEST_HEADER include/spdk/util.h 00:05:21.593 TEST_HEADER include/spdk/uuid.h 00:05:21.593 CC test/env/mem_callbacks/mem_callbacks.o 00:05:21.593 TEST_HEADER include/spdk/version.h 00:05:21.593 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:21.593 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:21.593 TEST_HEADER include/spdk/vhost.h 00:05:21.593 TEST_HEADER include/spdk/vmd.h 00:05:21.593 TEST_HEADER include/spdk/xor.h 00:05:21.593 TEST_HEADER include/spdk/zipf.h 00:05:21.593 LINK rpc_client_test 00:05:21.593 CXX test/cpp_headers/accel.o 00:05:21.593 LINK poller_perf 00:05:21.851 LINK zipf 00:05:21.851 LINK spdk_trace_record 00:05:21.851 LINK bdev_svc 00:05:21.851 LINK ioat_perf 00:05:21.851 CXX test/cpp_headers/accel_module.o 00:05:21.851 LINK spdk_trace 00:05:21.851 CC examples/ioat/verify/verify.o 00:05:22.109 CC app/nvmf_tgt/nvmf_main.o 00:05:22.109 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:22.109 CXX test/cpp_headers/assert.o 00:05:22.109 LINK test_dma 00:05:22.109 CC examples/sock/hello_world/hello_sock.o 00:05:22.109 CC examples/thread/thread/thread_ex.o 00:05:22.109 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:22.109 LINK verify 00:05:22.109 LINK mem_callbacks 00:05:22.368 LINK nvmf_tgt 00:05:22.368 CC examples/vmd/lsvmd/lsvmd.o 00:05:22.368 CXX test/cpp_headers/barrier.o 00:05:22.368 LINK interrupt_tgt 00:05:22.368 LINK lsvmd 00:05:22.368 CC examples/vmd/led/led.o 00:05:22.368 CC test/env/vtophys/vtophys.o 00:05:22.368 LINK hello_sock 00:05:22.368 LINK thread 00:05:22.368 CXX test/cpp_headers/base64.o 00:05:22.368 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:22.699 CC test/env/memory/memory_ut.o 00:05:22.699 LINK led 00:05:22.699 CXX test/cpp_headers/bdev.o 00:05:22.699 CXX test/cpp_headers/bdev_module.o 00:05:22.699 LINK vtophys 00:05:22.699 CC app/iscsi_tgt/iscsi_tgt.o 00:05:22.699 LINK nvme_fuzz 00:05:22.699 CXX test/cpp_headers/bdev_zone.o 00:05:22.699 CC test/env/pci/pci_ut.o 00:05:22.699 LINK env_dpdk_post_init 00:05:22.977 CXX test/cpp_headers/bit_array.o 00:05:22.977 LINK iscsi_tgt 00:05:22.977 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:22.977 CC examples/idxd/perf/perf.o 00:05:22.977 CC test/event/event_perf/event_perf.o 00:05:22.977 CC test/nvme/aer/aer.o 00:05:22.977 CC app/spdk_tgt/spdk_tgt.o 00:05:22.977 CXX test/cpp_headers/bit_pool.o 00:05:22.977 CC test/accel/dif/dif.o 00:05:23.237 CXX test/cpp_headers/blob_bdev.o 00:05:23.237 LINK event_perf 00:05:23.237 LINK pci_ut 00:05:23.237 LINK spdk_tgt 00:05:23.237 CXX test/cpp_headers/blobfs_bdev.o 00:05:23.496 LINK idxd_perf 00:05:23.496 LINK aer 00:05:23.496 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:23.496 CC test/event/reactor/reactor.o 00:05:23.496 CXX test/cpp_headers/blobfs.o 00:05:23.496 CC app/spdk_lspci/spdk_lspci.o 00:05:23.496 CC test/event/reactor_perf/reactor_perf.o 00:05:23.496 CXX test/cpp_headers/blob.o 00:05:23.496 LINK reactor 00:05:23.763 CC test/nvme/reset/reset.o 00:05:23.763 LINK spdk_lspci 00:05:23.763 LINK hello_fsdev 00:05:23.763 LINK reactor_perf 00:05:23.763 CXX test/cpp_headers/conf.o 00:05:23.763 CC 
test/event/app_repeat/app_repeat.o 00:05:23.763 CC test/nvme/sgl/sgl.o 00:05:24.027 CXX test/cpp_headers/config.o 00:05:24.027 LINK memory_ut 00:05:24.027 LINK dif 00:05:24.027 CXX test/cpp_headers/cpuset.o 00:05:24.027 LINK reset 00:05:24.027 CC app/spdk_nvme_perf/perf.o 00:05:24.027 LINK app_repeat 00:05:24.027 CC test/nvme/e2edp/nvme_dp.o 00:05:24.027 CC examples/accel/perf/accel_perf.o 00:05:24.027 CXX test/cpp_headers/crc16.o 00:05:24.284 LINK sgl 00:05:24.284 CC test/nvme/overhead/overhead.o 00:05:24.284 CC test/nvme/err_injection/err_injection.o 00:05:24.284 CXX test/cpp_headers/crc32.o 00:05:24.284 CC test/event/scheduler/scheduler.o 00:05:24.284 LINK nvme_dp 00:05:24.284 CC examples/blob/hello_world/hello_blob.o 00:05:24.541 LINK err_injection 00:05:24.541 CXX test/cpp_headers/crc64.o 00:05:24.541 CXX test/cpp_headers/dif.o 00:05:24.541 CC examples/nvme/hello_world/hello_world.o 00:05:24.541 LINK hello_blob 00:05:24.541 LINK scheduler 00:05:24.541 CXX test/cpp_headers/dma.o 00:05:24.822 LINK overhead 00:05:24.822 LINK accel_perf 00:05:24.822 CC test/nvme/startup/startup.o 00:05:24.822 LINK hello_world 00:05:24.823 CC examples/blob/cli/blobcli.o 00:05:24.823 CXX test/cpp_headers/endian.o 00:05:24.823 CXX test/cpp_headers/env_dpdk.o 00:05:25.080 CC test/nvme/reserve/reserve.o 00:05:25.080 LINK startup 00:05:25.080 LINK spdk_nvme_perf 00:05:25.080 CXX test/cpp_headers/env.o 00:05:25.080 CC test/nvme/simple_copy/simple_copy.o 00:05:25.080 CC test/nvme/connect_stress/connect_stress.o 00:05:25.080 CC examples/nvme/reconnect/reconnect.o 00:05:25.080 CC test/nvme/boot_partition/boot_partition.o 00:05:25.337 LINK reserve 00:05:25.337 CXX test/cpp_headers/event.o 00:05:25.337 LINK iscsi_fuzz 00:05:25.337 CC app/spdk_nvme_identify/identify.o 00:05:25.337 LINK boot_partition 00:05:25.337 LINK simple_copy 00:05:25.337 CXX test/cpp_headers/fd_group.o 00:05:25.337 LINK blobcli 00:05:25.595 CXX test/cpp_headers/fd.o 00:05:25.595 CC examples/bdev/hello_world/hello_bdev.o 
00:05:25.595 LINK connect_stress 00:05:25.595 CXX test/cpp_headers/file.o 00:05:25.595 LINK reconnect 00:05:25.595 CXX test/cpp_headers/fsdev.o 00:05:25.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:25.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:25.595 CXX test/cpp_headers/fsdev_module.o 00:05:25.853 CC test/nvme/compliance/nvme_compliance.o 00:05:25.853 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:25.853 CC app/spdk_nvme_discover/discovery_aer.o 00:05:25.853 CC test/nvme/fused_ordering/fused_ordering.o 00:05:25.853 CXX test/cpp_headers/ftl.o 00:05:25.853 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:25.853 LINK hello_bdev 00:05:26.111 CC test/nvme/fdp/fdp.o 00:05:26.111 LINK doorbell_aers 00:05:26.111 LINK spdk_nvme_discover 00:05:26.111 LINK fused_ordering 00:05:26.111 CXX test/cpp_headers/fuse_dispatcher.o 00:05:26.369 LINK nvme_compliance 00:05:26.369 CXX test/cpp_headers/gpt_spec.o 00:05:26.369 CXX test/cpp_headers/hexlify.o 00:05:26.369 CC test/nvme/cuse/cuse.o 00:05:26.369 LINK fdp 00:05:26.369 LINK vhost_fuzz 00:05:26.369 CXX test/cpp_headers/histogram_data.o 00:05:26.369 LINK spdk_nvme_identify 00:05:26.369 CC examples/bdev/bdevperf/bdevperf.o 00:05:26.369 CC app/spdk_top/spdk_top.o 00:05:26.627 LINK nvme_manage 00:05:26.627 CXX test/cpp_headers/idxd.o 00:05:26.627 CC test/app/histogram_perf/histogram_perf.o 00:05:26.627 CXX test/cpp_headers/idxd_spec.o 00:05:26.627 CC app/vhost/vhost.o 00:05:26.885 CC examples/nvme/arbitration/arbitration.o 00:05:26.885 LINK histogram_perf 00:05:26.885 CXX test/cpp_headers/init.o 00:05:26.885 CC examples/nvme/hotplug/hotplug.o 00:05:26.885 LINK vhost 00:05:26.885 CC test/blobfs/mkfs/mkfs.o 00:05:27.143 CC test/lvol/esnap/esnap.o 00:05:27.143 CC test/app/jsoncat/jsoncat.o 00:05:27.143 CXX test/cpp_headers/ioat.o 00:05:27.143 LINK mkfs 00:05:27.143 LINK arbitration 00:05:27.143 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:27.452 LINK hotplug 00:05:27.452 LINK jsoncat 00:05:27.452 CXX 
test/cpp_headers/ioat_spec.o 00:05:27.452 LINK cmb_copy 00:05:27.452 CC examples/nvme/abort/abort.o 00:05:27.452 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:27.452 CXX test/cpp_headers/iscsi_spec.o 00:05:27.452 CC test/app/stub/stub.o 00:05:27.712 CC app/spdk_dd/spdk_dd.o 00:05:27.712 LINK spdk_top 00:05:27.712 CXX test/cpp_headers/json.o 00:05:27.712 LINK pmr_persistence 00:05:27.712 LINK stub 00:05:27.712 LINK bdevperf 00:05:27.969 CXX test/cpp_headers/jsonrpc.o 00:05:27.969 CC app/fio/nvme/fio_plugin.o 00:05:27.969 CC app/fio/bdev/fio_plugin.o 00:05:27.969 CXX test/cpp_headers/keyring.o 00:05:27.969 LINK cuse 00:05:27.969 LINK spdk_dd 00:05:27.969 CXX test/cpp_headers/keyring_module.o 00:05:28.225 CXX test/cpp_headers/likely.o 00:05:28.225 CXX test/cpp_headers/log.o 00:05:28.225 CC test/bdev/bdevio/bdevio.o 00:05:28.225 LINK abort 00:05:28.225 CXX test/cpp_headers/lvol.o 00:05:28.225 CXX test/cpp_headers/md5.o 00:05:28.225 CXX test/cpp_headers/memory.o 00:05:28.483 CXX test/cpp_headers/mmio.o 00:05:28.483 CXX test/cpp_headers/nbd.o 00:05:28.483 CXX test/cpp_headers/net.o 00:05:28.483 CXX test/cpp_headers/notify.o 00:05:28.483 CXX test/cpp_headers/nvme.o 00:05:28.483 CXX test/cpp_headers/nvme_intel.o 00:05:28.483 LINK spdk_bdev 00:05:28.483 CXX test/cpp_headers/nvme_ocssd.o 00:05:28.741 LINK spdk_nvme 00:05:28.741 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:28.741 CXX test/cpp_headers/nvme_spec.o 00:05:28.741 CXX test/cpp_headers/nvme_zns.o 00:05:28.741 CXX test/cpp_headers/nvmf_cmd.o 00:05:28.741 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:28.741 CXX test/cpp_headers/nvmf.o 00:05:28.741 CC examples/nvmf/nvmf/nvmf.o 00:05:28.741 CXX test/cpp_headers/nvmf_spec.o 00:05:28.999 CXX test/cpp_headers/nvmf_transport.o 00:05:28.999 LINK bdevio 00:05:28.999 CXX test/cpp_headers/opal.o 00:05:28.999 CXX test/cpp_headers/opal_spec.o 00:05:28.999 CXX test/cpp_headers/pci_ids.o 00:05:28.999 CXX test/cpp_headers/pipe.o 00:05:28.999 CXX test/cpp_headers/queue.o 
00:05:28.999 CXX test/cpp_headers/reduce.o 00:05:28.999 CXX test/cpp_headers/rpc.o 00:05:29.257 CXX test/cpp_headers/scheduler.o 00:05:29.257 CXX test/cpp_headers/scsi.o 00:05:29.257 CXX test/cpp_headers/scsi_spec.o 00:05:29.257 CXX test/cpp_headers/sock.o 00:05:29.257 LINK nvmf 00:05:29.257 CXX test/cpp_headers/stdinc.o 00:05:29.257 CXX test/cpp_headers/string.o 00:05:29.257 CXX test/cpp_headers/thread.o 00:05:29.257 CXX test/cpp_headers/trace.o 00:05:29.257 CXX test/cpp_headers/trace_parser.o 00:05:29.257 CXX test/cpp_headers/tree.o 00:05:29.257 CXX test/cpp_headers/ublk.o 00:05:29.257 CXX test/cpp_headers/util.o 00:05:29.257 CXX test/cpp_headers/uuid.o 00:05:29.516 CXX test/cpp_headers/version.o 00:05:29.516 CXX test/cpp_headers/vfio_user_pci.o 00:05:29.516 CXX test/cpp_headers/vfio_user_spec.o 00:05:29.516 CXX test/cpp_headers/vhost.o 00:05:29.516 CXX test/cpp_headers/vmd.o 00:05:29.516 CXX test/cpp_headers/xor.o 00:05:29.516 CXX test/cpp_headers/zipf.o 00:05:34.784 LINK esnap 00:05:34.784 00:05:34.784 real 1m36.318s 00:05:34.784 user 8m46.511s 00:05:34.784 sys 1m42.243s 00:05:34.784 19:58:35 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:34.784 19:58:35 make -- common/autotest_common.sh@10 -- $ set +x 00:05:34.784 ************************************ 00:05:34.784 END TEST make 00:05:34.784 ************************************ 00:05:34.784 19:58:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:34.784 19:58:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:34.784 19:58:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:34.784 19:58:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.784 19:58:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:34.784 19:58:35 -- pm/common@44 -- $ pid=5484 00:05:34.784 19:58:35 -- pm/common@50 -- $ kill -TERM 5484 00:05:34.784 19:58:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:05:34.784 19:58:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:34.784 19:58:35 -- pm/common@44 -- $ pid=5485 00:05:34.784 19:58:35 -- pm/common@50 -- $ kill -TERM 5485 00:05:34.784 19:58:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:34.784 19:58:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:34.784 19:58:35 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.784 19:58:35 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.784 19:58:35 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.784 19:58:35 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.784 19:58:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.784 19:58:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.784 19:58:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.784 19:58:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.784 19:58:35 -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.784 19:58:35 -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.785 19:58:35 -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.785 19:58:35 -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.785 19:58:35 -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.785 19:58:35 -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.785 19:58:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.785 19:58:35 -- scripts/common.sh@344 -- # case "$op" in 00:05:34.785 19:58:35 -- scripts/common.sh@345 -- # : 1 00:05:34.785 19:58:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.785 19:58:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.785 19:58:35 -- scripts/common.sh@365 -- # decimal 1 00:05:34.785 19:58:35 -- scripts/common.sh@353 -- # local d=1 00:05:34.785 19:58:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.785 19:58:35 -- scripts/common.sh@355 -- # echo 1 00:05:34.785 19:58:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.785 19:58:35 -- scripts/common.sh@366 -- # decimal 2 00:05:34.785 19:58:35 -- scripts/common.sh@353 -- # local d=2 00:05:34.785 19:58:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.785 19:58:35 -- scripts/common.sh@355 -- # echo 2 00:05:34.785 19:58:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.785 19:58:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.785 19:58:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.785 19:58:35 -- scripts/common.sh@368 -- # return 0 00:05:34.785 19:58:35 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.785 19:58:35 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.785 --rc genhtml_branch_coverage=1 00:05:34.785 --rc genhtml_function_coverage=1 00:05:34.785 --rc genhtml_legend=1 00:05:34.785 --rc geninfo_all_blocks=1 00:05:34.785 --rc geninfo_unexecuted_blocks=1 00:05:34.785 00:05:34.785 ' 00:05:34.785 19:58:35 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.785 --rc genhtml_branch_coverage=1 00:05:34.785 --rc genhtml_function_coverage=1 00:05:34.785 --rc genhtml_legend=1 00:05:34.785 --rc geninfo_all_blocks=1 00:05:34.785 --rc geninfo_unexecuted_blocks=1 00:05:34.785 00:05:34.785 ' 00:05:34.785 19:58:35 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.785 --rc genhtml_branch_coverage=1 00:05:34.785 --rc 
genhtml_function_coverage=1 00:05:34.785 --rc genhtml_legend=1 00:05:34.785 --rc geninfo_all_blocks=1 00:05:34.785 --rc geninfo_unexecuted_blocks=1 00:05:34.785 00:05:34.785 ' 00:05:34.785 19:58:35 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.785 --rc genhtml_branch_coverage=1 00:05:34.785 --rc genhtml_function_coverage=1 00:05:34.785 --rc genhtml_legend=1 00:05:34.785 --rc geninfo_all_blocks=1 00:05:34.785 --rc geninfo_unexecuted_blocks=1 00:05:34.785 00:05:34.785 ' 00:05:34.785 19:58:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.785 19:58:35 -- nvmf/common.sh@7 -- # uname -s 00:05:34.785 19:58:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.785 19:58:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.785 19:58:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.785 19:58:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.785 19:58:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.785 19:58:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.785 19:58:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.785 19:58:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.785 19:58:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.785 19:58:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.785 19:58:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:05:34.785 19:58:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:05:34.785 19:58:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.785 19:58:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.785 19:58:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.785 19:58:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:34.785 19:58:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.785 19:58:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.785 19:58:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.785 19:58:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.785 19:58:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.785 19:58:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.785 19:58:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.785 19:58:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.785 19:58:35 -- paths/export.sh@5 -- # export PATH 00:05:34.785 19:58:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.785 19:58:35 -- nvmf/common.sh@51 -- # : 0 00:05:34.785 19:58:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.785 19:58:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.785 19:58:35 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:34.785 19:58:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.785 19:58:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.785 19:58:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.785 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.785 19:58:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.785 19:58:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.785 19:58:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.785 19:58:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:34.785 19:58:36 -- spdk/autotest.sh@32 -- # uname -s 00:05:34.785 19:58:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:34.785 19:58:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:34.785 19:58:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:34.785 19:58:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:34.785 19:58:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:34.785 19:58:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:34.785 19:58:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:34.785 19:58:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:34.785 19:58:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54565 00:05:34.785 19:58:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:34.785 19:58:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:34.785 19:58:36 -- pm/common@17 -- # local monitor 00:05:34.785 19:58:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.785 19:58:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.785 19:58:36 -- pm/common@25 -- # sleep 1 00:05:34.785 19:58:36 -- pm/common@21 -- # date +%s 00:05:34.786 19:58:36 -- 
pm/common@21 -- # date +%s 00:05:34.786 19:58:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733428716 00:05:34.786 19:58:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733428716 00:05:34.786 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733428716_collect-cpu-load.pm.log 00:05:34.786 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733428716_collect-vmstat.pm.log 00:05:35.726 19:58:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:35.726 19:58:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:35.726 19:58:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.726 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 19:58:37 -- spdk/autotest.sh@59 -- # create_test_list 00:05:35.726 19:58:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:35.726 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 19:58:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:35.726 19:58:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:35.726 19:58:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:35.726 19:58:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:35.726 19:58:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:35.726 19:58:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:35.726 19:58:37 -- common/autotest_common.sh@1457 -- # uname 00:05:35.726 19:58:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:35.726 19:58:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:35.726 19:58:37 -- common/autotest_common.sh@1477 -- 
# uname
00:05:35.726 19:58:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:35.726 19:58:37 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:35.726 19:58:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:35.984 lcov: LCOV version 1.15
00:05:35.984 19:58:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:50.865 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:50.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:06:08.986 19:59:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:08.986 19:59:07 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:08.986 19:59:07 -- common/autotest_common.sh@10 -- # set +x
00:06:08.986 19:59:07 -- spdk/autotest.sh@78 -- # rm -f
00:06:08.986 19:59:07 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:08.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:08.986 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:06:08.986 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:08.986 19:59:08 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:08.986 19:59:08 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:08.986 19:59:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:08.986 19:59:08 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:06:08.986 19:59:08 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:06:08.986 19:59:08 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:06:08.986 19:59:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:06:08.986 19:59:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:06:08.986 19:59:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:08.986 19:59:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:06:08.986 19:59:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:06:08.986 19:59:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:06:08.986 19:59:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2
00:06:08.986 19:59:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n2
00:06:08.986 19:59:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:08.986 19:59:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3
00:06:08.986 19:59:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n3
00:06:08.986 19:59:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:06:08.986 19:59:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:08.986 19:59:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:08.987 19:59:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:08.987 19:59:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:08.987 19:59:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:08.987 19:59:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:08.987 19:59:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:08.987 No valid GPT data, bailing
00:06:08.987 19:59:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:08.987 19:59:08 -- scripts/common.sh@394 -- # pt=
00:06:08.987 19:59:08 -- scripts/common.sh@395 -- # return 1
00:06:08.987 19:59:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:08.987 1+0 records in
00:06:08.987 1+0 records out
00:06:08.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437636 s, 240 MB/s
00:06:08.987 19:59:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:08.987 19:59:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:08.987 19:59:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:06:08.987 19:59:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:06:08.987 19:59:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:06:08.987 No valid GPT data, bailing
00:06:08.987 19:59:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:06:08.987 19:59:08 -- scripts/common.sh@394 -- # pt=
00:06:08.987 19:59:08 -- scripts/common.sh@395 -- # return 1
00:06:08.987 19:59:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:06:08.987 1+0 records in
00:06:08.987 1+0 records out
00:06:08.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680866 s, 154 MB/s
00:06:08.987 19:59:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:08.987 19:59:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:08.987 19:59:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:06:08.987 19:59:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:06:08.987 19:59:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:06:08.987 No valid GPT data, bailing
00:06:08.987 19:59:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:06:08.987 19:59:09 -- scripts/common.sh@394 -- # pt=
00:06:08.987 19:59:09 -- scripts/common.sh@395 -- # return 1
00:06:08.987 19:59:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:06:08.987 1+0 records in
00:06:08.987 1+0 records out
00:06:08.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583331 s, 180 MB/s
00:06:08.987 19:59:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:08.987 19:59:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:08.987 19:59:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:06:08.987 19:59:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:06:08.987 19:59:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:06:08.987 No valid GPT data, bailing
00:06:08.987 19:59:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:06:08.987 19:59:09 -- scripts/common.sh@394 -- # pt=
00:06:08.987 19:59:09 -- scripts/common.sh@395 -- # return 1
00:06:08.987 19:59:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:06:08.987 1+0 records in
00:06:08.987 1+0 records out
00:06:08.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431634 s, 243 MB/s
00:06:08.987 19:59:09 -- spdk/autotest.sh@105 -- # sync
00:06:08.987 19:59:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:08.987 19:59:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:08.987 19:59:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:10.887 19:59:12 -- spdk/autotest.sh@111 -- # uname -s
00:06:10.887 19:59:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:10.888 19:59:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:10.888 19:59:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:11.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:11.455 Hugepages
00:06:11.455 node hugesize free / total
00:06:11.455 node0 1048576kB 0 / 0
00:06:11.455 node0 2048kB 0 / 0
00:06:11.455
00:06:11.455 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:11.713 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:11.713 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:11.713 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:06:11.713 19:59:13 -- spdk/autotest.sh@117 -- # uname -s
00:06:11.973 19:59:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:11.973 19:59:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:11.973 19:59:13 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:12.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:12.800 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:12.800 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:12.800 19:59:14 -- common/autotest_common.sh@1517 -- # sleep 1
00:06:14.192 19:59:15 -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:14.192 19:59:15 -- common/autotest_common.sh@1518 -- # local bdfs
00:06:14.192 19:59:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:14.192 19:59:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:14.192 19:59:15 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:14.192 19:59:15 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:14.192 19:59:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:14.192 19:59:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:14.192 19:59:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:14.192 19:59:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:14.193 19:59:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:14.193 19:59:15 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:14.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:14.451 Waiting for block devices as requested
00:06:14.451 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:14.711 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:14.711 19:59:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:14.711 19:59:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:14.711 19:59:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:14.711 19:59:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:14.711 19:59:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1543 -- # continue
00:06:14.711 19:59:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:14.711 19:59:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:06:14.711 19:59:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:14.711 19:59:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:14.711 19:59:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:14.711 19:59:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:14.711 19:59:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:14.711 19:59:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:14.711 19:59:16 -- common/autotest_common.sh@1543 -- # continue
00:06:14.711 19:59:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:14.711 19:59:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:14.711 19:59:16 -- common/autotest_common.sh@10 -- # set +x
00:06:14.970 19:59:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:14.970 19:59:16 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:14.970 19:59:16 -- common/autotest_common.sh@10 -- # set +x
00:06:14.970 19:59:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:15.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:15.908 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.908 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.908 19:59:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:15.908 19:59:17 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:15.908 19:59:17 -- common/autotest_common.sh@10 -- # set +x
00:06:15.908 19:59:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:15.908 19:59:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:15.908 19:59:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:15.909 19:59:17 -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:15.909 19:59:17 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:15.909 19:59:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:15.909 19:59:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:15.909 19:59:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:15.909 19:59:17 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:15.909 19:59:17 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:15.909 19:59:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:15.909 19:59:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:15.909 19:59:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:16.167 19:59:17 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:16.167 19:59:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:16.167 19:59:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:16.167 19:59:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:16.168 19:59:17 -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:16.168 19:59:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:16.168 19:59:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:16.168 19:59:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:16.168 19:59:17 -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:16.168 19:59:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:16.168 19:59:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:16.168 19:59:17 -- common/autotest_common.sh@1572 -- # return 0
00:06:16.168 19:59:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:16.168 19:59:17 -- common/autotest_common.sh@1580 -- # return 0
00:06:16.168 19:59:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:16.168 19:59:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:16.168 19:59:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:16.168 19:59:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:16.168 19:59:17 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:16.168 19:59:17 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:16.168 19:59:17 -- common/autotest_common.sh@10 -- # set +x
00:06:16.168 19:59:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:16.168 19:59:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:16.168 19:59:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.168 19:59:17 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.168 19:59:17 -- common/autotest_common.sh@10 -- # set +x
00:06:16.168 ************************************
00:06:16.168 START TEST env
00:06:16.168 ************************************
00:06:16.168 19:59:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:16.168 * Looking for test storage...
00:06:16.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:16.168 19:59:17 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:16.168 19:59:17 env -- common/autotest_common.sh@1711 -- # lcov --version
00:06:16.168 19:59:17 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:16.168 19:59:17 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:16.168 19:59:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:16.168 19:59:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:16.168 19:59:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:16.168 19:59:17 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:16.168 19:59:17 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:16.168 19:59:17 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:16.168 19:59:17 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:16.168 19:59:17 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:16.168 19:59:17 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:16.168 19:59:17 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:16.168 19:59:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:16.168 19:59:17 env -- scripts/common.sh@344 -- # case "$op" in
00:06:16.168 19:59:17 env -- scripts/common.sh@345 -- # : 1
00:06:16.168 19:59:17 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:16.168 19:59:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:16.168 19:59:17 env -- scripts/common.sh@365 -- # decimal 1
00:06:16.168 19:59:17 env -- scripts/common.sh@353 -- # local d=1
00:06:16.168 19:59:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:16.168 19:59:17 env -- scripts/common.sh@355 -- # echo 1
00:06:16.168 19:59:17 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:16.427 19:59:17 env -- scripts/common.sh@366 -- # decimal 2
00:06:16.427 19:59:17 env -- scripts/common.sh@353 -- # local d=2
00:06:16.427 19:59:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:16.427 19:59:17 env -- scripts/common.sh@355 -- # echo 2
00:06:16.427 19:59:17 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:16.427 19:59:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:16.427 19:59:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:16.427 19:59:17 env -- scripts/common.sh@368 -- # return 0
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.427 --rc genhtml_branch_coverage=1
00:06:16.427 --rc genhtml_function_coverage=1
00:06:16.427 --rc genhtml_legend=1
00:06:16.427 --rc geninfo_all_blocks=1
00:06:16.427 --rc geninfo_unexecuted_blocks=1
00:06:16.427
00:06:16.427 '
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.427 --rc genhtml_branch_coverage=1
00:06:16.427 --rc genhtml_function_coverage=1
00:06:16.427 --rc genhtml_legend=1
00:06:16.427 --rc geninfo_all_blocks=1
00:06:16.427 --rc geninfo_unexecuted_blocks=1
00:06:16.427
00:06:16.427 '
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.427 --rc genhtml_branch_coverage=1
00:06:16.427 --rc genhtml_function_coverage=1
00:06:16.427 --rc genhtml_legend=1
00:06:16.427 --rc geninfo_all_blocks=1
00:06:16.427 --rc geninfo_unexecuted_blocks=1
00:06:16.427
00:06:16.427 '
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.427 --rc genhtml_branch_coverage=1
00:06:16.427 --rc genhtml_function_coverage=1
00:06:16.427 --rc genhtml_legend=1
00:06:16.427 --rc geninfo_all_blocks=1
00:06:16.427 --rc geninfo_unexecuted_blocks=1
00:06:16.427
00:06:16.427 '
00:06:16.427 19:59:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.427 19:59:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.427 19:59:17 env -- common/autotest_common.sh@10 -- # set +x
00:06:16.427 ************************************
00:06:16.427 START TEST env_memory
00:06:16.427 ************************************
00:06:16.427 19:59:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:16.427
00:06:16.427
00:06:16.427 CUnit - A unit testing framework for C - Version 2.1-3
00:06:16.427 http://cunit.sourceforge.net/
00:06:16.427
00:06:16.427
00:06:16.427 Suite: memory
00:06:16.427 Test: alloc and free memory map ...[2024-12-05 19:59:17.695345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:16.427 passed
00:06:16.427 Test: mem map translation ...[2024-12-05 19:59:17.739791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:16.427 [2024-12-05 19:59:17.739843] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:16.427 [2024-12-05 19:59:17.739917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:16.427 [2024-12-05 19:59:17.739939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:16.427 passed
00:06:16.427 Test: mem map registration ...[2024-12-05 19:59:17.814850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:16.427 [2024-12-05 19:59:17.814936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:16.427 passed
00:06:16.687 Test: mem map adjacent registrations ...passed
00:06:16.687
00:06:16.687 Run Summary: Type Total Ran Passed Failed Inactive
00:06:16.687 suites 1 1 n/a 0 0
00:06:16.687 tests 4 4 4 0 0
00:06:16.687 asserts 152 152 152 0 n/a
00:06:16.687
00:06:16.687 Elapsed time = 0.260 seconds
00:06:16.687
00:06:16.687 real 0m0.312s
00:06:16.687 user 0m0.273s
00:06:16.687 sys 0m0.028s
00:06:16.687 19:59:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.687 19:59:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:16.687 ************************************
00:06:16.687 END TEST env_memory
00:06:16.687 ************************************
00:06:16.687 19:59:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:16.687 19:59:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.687 19:59:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.687 19:59:17 env -- common/autotest_common.sh@10 -- # set +x
00:06:16.687 ************************************
00:06:16.687 START TEST env_vtophys
00:06:16.687 ************************************
00:06:16.687 19:59:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:16.687 EAL: lib.eal log level changed from notice to debug
00:06:16.687 EAL: Detected lcore 0 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 1 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 2 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 3 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 4 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 5 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 6 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 7 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 8 as core 0 on socket 0
00:06:16.687 EAL: Detected lcore 9 as core 0 on socket 0
00:06:16.687 EAL: Maximum logical cores by configuration: 128
00:06:16.687 EAL: Detected CPU lcores: 10
00:06:16.687 EAL: Detected NUMA nodes: 1
00:06:16.687 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:16.687 EAL: Detected shared linkage of DPDK
00:06:16.687 EAL: No shared files mode enabled, IPC will be disabled
00:06:16.687 EAL: Selected IOVA mode 'PA'
00:06:16.687 EAL: Probing VFIO support...
00:06:16.687 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:16.687 EAL: VFIO modules not loaded, skipping VFIO support...
00:06:16.687 EAL: Ask a virtual area of 0x2e000 bytes
00:06:16.687 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:16.687 EAL: Setting up physically contiguous memory...
00:06:16.687 EAL: Setting maximum number of open files to 524288
00:06:16.687 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:16.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:16.687 EAL: Ask a virtual area of 0x61000 bytes
00:06:16.687 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:16.687 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.687 EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.687 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:16.687 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:16.687 EAL: Ask a virtual area of 0x61000 bytes
00:06:16.687 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:16.687 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.687 EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.687 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:16.687 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:16.687 EAL: Ask a virtual area of 0x61000 bytes
00:06:16.687 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:16.687 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.687 EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.687 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:16.687 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:16.687 EAL: Ask a virtual area of 0x61000 bytes
00:06:16.687 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:16.687 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.687 EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.687 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:16.687 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:16.687 EAL: Hugepages will be freed exactly as allocated.
00:06:16.687 EAL: No shared files mode enabled, IPC is disabled
00:06:16.687 EAL: No shared files mode enabled, IPC is disabled
00:06:16.945 EAL: TSC frequency is ~2290000 KHz
00:06:16.945 EAL: Main lcore 0 is ready (tid=7fca0cec7a40;cpuset=[0])
00:06:16.945 EAL: Trying to obtain current memory policy.
00:06:16.945 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.945 EAL: Restoring previous memory policy: 0
00:06:16.945 EAL: request: mp_malloc_sync
00:06:16.945 EAL: No shared files mode enabled, IPC is disabled
00:06:16.945 EAL: Heap on socket 0 was expanded by 2MB
00:06:16.945 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:16.945 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:16.945 EAL: Mem event callback 'spdk:(nil)' registered
00:06:16.945 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:16.945
00:06:16.945
00:06:16.945 CUnit - A unit testing framework for C - Version 2.1-3
00:06:16.945 http://cunit.sourceforge.net/
00:06:16.945
00:06:16.945
00:06:16.945 Suite: components_suite
00:06:17.208 Test: vtophys_malloc_test ...passed
00:06:17.208 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:17.208 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.208 EAL: Restoring previous memory policy: 4
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was expanded by 4MB
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was shrunk by 4MB
00:06:17.208 EAL: Trying to obtain current memory policy.
00:06:17.208 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.208 EAL: Restoring previous memory policy: 4
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was expanded by 6MB
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was shrunk by 6MB
00:06:17.208 EAL: Trying to obtain current memory policy.
00:06:17.208 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.208 EAL: Restoring previous memory policy: 4
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was expanded by 10MB
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was shrunk by 10MB
00:06:17.208 EAL: Trying to obtain current memory policy.
00:06:17.208 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.208 EAL: Restoring previous memory policy: 4
00:06:17.208 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.208 EAL: request: mp_malloc_sync
00:06:17.208 EAL: No shared files mode enabled, IPC is disabled
00:06:17.208 EAL: Heap on socket 0 was expanded by 18MB
00:06:17.473 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.473 EAL: request: mp_malloc_sync
00:06:17.473 EAL: No shared files mode enabled, IPC is disabled
00:06:17.473 EAL: Heap on socket 0 was shrunk by 18MB
00:06:17.473 EAL: Trying to obtain current memory policy.
00:06:17.473 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.473 EAL: Restoring previous memory policy: 4
00:06:17.473 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.473 EAL: request: mp_malloc_sync
00:06:17.473 EAL: No shared files mode enabled, IPC is disabled
00:06:17.473 EAL: Heap on socket 0 was expanded by 34MB
00:06:17.473 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.473 EAL: request: mp_malloc_sync
00:06:17.473 EAL: No shared files mode enabled, IPC is disabled
00:06:17.473 EAL: Heap on socket 0 was shrunk by 34MB
00:06:17.473 EAL: Trying to obtain current memory policy.
00:06:17.473 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.473 EAL: Restoring previous memory policy: 4
00:06:17.473 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.473 EAL: request: mp_malloc_sync
00:06:17.473 EAL: No shared files mode enabled, IPC is disabled
00:06:17.473 EAL: Heap on socket 0 was expanded by 66MB
00:06:17.731 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.731 EAL: request: mp_malloc_sync
00:06:17.731 EAL: No shared files mode enabled, IPC is disabled
00:06:17.732 EAL: Heap on socket 0 was shrunk by 66MB
00:06:17.732 EAL: Trying to obtain current memory policy.
00:06:17.732 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.732 EAL: Restoring previous memory policy: 4
00:06:17.732 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.732 EAL: request: mp_malloc_sync
00:06:17.732 EAL: No shared files mode enabled, IPC is disabled
00:06:17.732 EAL: Heap on socket 0 was expanded by 130MB
00:06:17.991 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.991 EAL: request: mp_malloc_sync
00:06:17.991 EAL: No shared files mode enabled, IPC is disabled
00:06:17.991 EAL: Heap on socket 0 was shrunk by 130MB
00:06:18.250 EAL: Trying to obtain current memory policy.
00:06:18.250 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:18.250 EAL: Restoring previous memory policy: 4
00:06:18.250 EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.250 EAL: request: mp_malloc_sync
00:06:18.250 EAL: No shared files mode enabled, IPC is disabled
00:06:18.250 EAL: Heap on socket 0 was expanded by 258MB
00:06:18.817 EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.817 EAL: request: mp_malloc_sync
00:06:18.817 EAL: No shared files mode enabled, IPC is disabled
00:06:18.817 EAL: Heap on socket 0 was shrunk by 258MB
00:06:19.077 EAL: Trying to obtain current memory policy.
00:06:19.077 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.336 EAL: Restoring previous memory policy: 4
00:06:19.336 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.336 EAL: request: mp_malloc_sync
00:06:19.336 EAL: No shared files mode enabled, IPC is disabled
00:06:19.336 EAL: Heap on socket 0 was expanded by 514MB
00:06:20.274 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.274 EAL: request: mp_malloc_sync
00:06:20.274 EAL: No shared files mode enabled, IPC is disabled
00:06:20.274 EAL: Heap on socket 0 was shrunk by 514MB
00:06:21.241 EAL: Trying to obtain current memory policy.
00:06:21.241 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:21.241 EAL: Restoring previous memory policy: 4
00:06:21.241 EAL: Calling mem event callback 'spdk:(nil)'
00:06:21.241 EAL: request: mp_malloc_sync
00:06:21.241 EAL: No shared files mode enabled, IPC is disabled
00:06:21.241 EAL: Heap on socket 0 was expanded by 1026MB
00:06:23.161 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.161 EAL: request: mp_malloc_sync
00:06:23.161 EAL: No shared files mode enabled, IPC is disabled
00:06:23.161 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:25.068 passed
00:06:25.068
00:06:25.068 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.068 suites 1 1 n/a 0 0
00:06:25.068 tests 2 2 2 0 0
00:06:25.068 asserts 5425 5425 5425 0 n/a
00:06:25.068
00:06:25.068 Elapsed time = 7.936 seconds
00:06:25.068 EAL: Calling mem event callback 'spdk:(nil)'
00:06:25.068 EAL: request: mp_malloc_sync
00:06:25.068 EAL: No shared files mode enabled, IPC is disabled
00:06:25.068 EAL: Heap on socket 0 was shrunk by 2MB
00:06:25.068 EAL: No shared files mode enabled, IPC is disabled
00:06:25.068 EAL: No shared files mode enabled, IPC is disabled
00:06:25.068 EAL: No shared files mode enabled, IPC is disabled
00:06:25.068
00:06:25.068 real 0m8.258s
00:06:25.068 user 0m7.294s
00:06:25.068 sys 0m0.813s
00:06:25.068 19:59:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.068 19:59:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:25.068 ************************************
00:06:25.068 END TEST env_vtophys
00:06:25.068 ************************************
00:06:25.068 19:59:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:25.068 19:59:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.068 19:59:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.068 19:59:26 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.068 ************************************
00:06:25.068 START TEST env_pci
00:06:25.068 ************************************
00:06:25.068 19:59:26 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:25.068
00:06:25.068
00:06:25.068 CUnit - A unit testing framework for C - Version 2.1-3
00:06:25.068 http://cunit.sourceforge.net/
00:06:25.068
00:06:25.068
00:06:25.068 Suite: pci
00:06:25.068 Test: pci_hook ...[2024-12-05 19:59:26.363421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56900 has claimed it
00:06:25.068 passed
00:06:25.068
00:06:25.068 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.068 suites 1 1 n/a 0 0
00:06:25.068 tests 1 1 1 0 0
00:06:25.068 asserts 25 25 25 0 n/a
00:06:25.068
00:06:25.068 Elapsed time = 0.005 seconds
00:06:25.068 EAL: Cannot find device (10000:00:01.0)
00:06:25.068 EAL: Failed to attach device on primary process
00:06:25.068
00:06:25.068 real 0m0.102s
00:06:25.068 user 0m0.045s
00:06:25.068 sys 0m0.057s
00:06:25.068 19:59:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.068 19:59:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:25.068 ************************************
00:06:25.068 END TEST env_pci
00:06:25.068 ************************************
00:06:25.068 19:59:26 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:25.068 19:59:26 env -- env/env.sh@15 -- # uname
00:06:25.068 19:59:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:25.068 19:59:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:25.068 19:59:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:25.068 19:59:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:25.068 19:59:26 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.068 19:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.068 ************************************ 00:06:25.068 START TEST env_dpdk_post_init 00:06:25.068 ************************************ 00:06:25.068 19:59:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:25.328 EAL: Detected CPU lcores: 10 00:06:25.328 EAL: Detected NUMA nodes: 1 00:06:25.328 EAL: Detected shared linkage of DPDK 00:06:25.328 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.328 EAL: Selected IOVA mode 'PA' 00:06:25.328 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.328 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:25.328 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:25.328 Starting DPDK initialization... 00:06:25.328 Starting SPDK post initialization... 00:06:25.328 SPDK NVMe probe 00:06:25.328 Attaching to 0000:00:10.0 00:06:25.328 Attaching to 0000:00:11.0 00:06:25.328 Attached to 0000:00:10.0 00:06:25.328 Attached to 0000:00:11.0 00:06:25.328 Cleaning up... 
00:06:25.587 00:06:25.587 real 0m0.275s 00:06:25.587 user 0m0.088s 00:06:25.587 sys 0m0.089s 00:06:25.587 19:59:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.587 19:59:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.587 ************************************ 00:06:25.587 END TEST env_dpdk_post_init 00:06:25.587 ************************************ 00:06:25.587 19:59:26 env -- env/env.sh@26 -- # uname 00:06:25.587 19:59:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:25.587 19:59:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.587 19:59:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.587 19:59:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.587 19:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.587 ************************************ 00:06:25.587 START TEST env_mem_callbacks 00:06:25.587 ************************************ 00:06:25.587 19:59:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.587 EAL: Detected CPU lcores: 10 00:06:25.587 EAL: Detected NUMA nodes: 1 00:06:25.587 EAL: Detected shared linkage of DPDK 00:06:25.587 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.587 EAL: Selected IOVA mode 'PA' 00:06:25.847 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.847 00:06:25.847 00:06:25.847 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.847 http://cunit.sourceforge.net/ 00:06:25.847 00:06:25.847 00:06:25.847 Suite: memory 00:06:25.847 Test: test ... 
00:06:25.847 register 0x200000200000 2097152 00:06:25.847 malloc 3145728 00:06:25.847 register 0x200000400000 4194304 00:06:25.847 buf 0x2000004fffc0 len 3145728 PASSED 00:06:25.847 malloc 64 00:06:25.847 buf 0x2000004ffec0 len 64 PASSED 00:06:25.847 malloc 4194304 00:06:25.847 register 0x200000800000 6291456 00:06:25.847 buf 0x2000009fffc0 len 4194304 PASSED 00:06:25.847 free 0x2000004fffc0 3145728 00:06:25.847 free 0x2000004ffec0 64 00:06:25.847 unregister 0x200000400000 4194304 PASSED 00:06:25.847 free 0x2000009fffc0 4194304 00:06:25.847 unregister 0x200000800000 6291456 PASSED 00:06:25.847 malloc 8388608 00:06:25.847 register 0x200000400000 10485760 00:06:25.847 buf 0x2000005fffc0 len 8388608 PASSED 00:06:25.847 free 0x2000005fffc0 8388608 00:06:25.847 unregister 0x200000400000 10485760 PASSED 00:06:25.847 passed 00:06:25.847 00:06:25.847 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.847 suites 1 1 n/a 0 0 00:06:25.847 tests 1 1 1 0 0 00:06:25.847 asserts 15 15 15 0 n/a 00:06:25.847 00:06:25.847 Elapsed time = 0.085 seconds 00:06:25.847 00:06:25.847 real 0m0.286s 00:06:25.847 user 0m0.107s 00:06:25.847 sys 0m0.077s 00:06:25.847 19:59:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.847 ************************************ 00:06:25.847 END TEST env_mem_callbacks 00:06:25.847 ************************************ 00:06:25.847 19:59:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:25.847 00:06:25.847 real 0m9.789s 00:06:25.847 user 0m8.032s 00:06:25.847 sys 0m1.419s 00:06:25.847 19:59:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.847 19:59:27 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.847 ************************************ 00:06:25.847 END TEST env 00:06:25.847 ************************************ 00:06:25.847 19:59:27 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:25.847 19:59:27 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.847 19:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.847 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.847 ************************************ 00:06:25.847 START TEST rpc 00:06:25.847 ************************************ 00:06:25.847 19:59:27 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:26.106 * Looking for test storage... 00:06:26.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.106 19:59:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.106 19:59:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.106 19:59:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.106 19:59:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.106 19:59:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.106 19:59:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.106 19:59:27 rpc -- scripts/common.sh@345 -- # : 1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.106 19:59:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.106 19:59:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.106 19:59:27 rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.106 19:59:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.106 19:59:27 rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.106 19:59:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.106 19:59:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.106 19:59:27 rpc -- scripts/common.sh@368 -- # return 0 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.106 19:59:27 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.106 --rc genhtml_branch_coverage=1 00:06:26.106 --rc genhtml_function_coverage=1 00:06:26.107 --rc genhtml_legend=1 00:06:26.107 --rc geninfo_all_blocks=1 00:06:26.107 --rc geninfo_unexecuted_blocks=1 00:06:26.107 00:06:26.107 ' 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.107 --rc genhtml_branch_coverage=1 00:06:26.107 --rc genhtml_function_coverage=1 00:06:26.107 --rc genhtml_legend=1 00:06:26.107 --rc geninfo_all_blocks=1 00:06:26.107 --rc geninfo_unexecuted_blocks=1 00:06:26.107 00:06:26.107 ' 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.107 --rc genhtml_branch_coverage=1 00:06:26.107 --rc genhtml_function_coverage=1 00:06:26.107 --rc genhtml_legend=1 00:06:26.107 --rc geninfo_all_blocks=1 00:06:26.107 --rc geninfo_unexecuted_blocks=1 00:06:26.107 00:06:26.107 ' 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.107 --rc genhtml_branch_coverage=1 00:06:26.107 --rc genhtml_function_coverage=1 00:06:26.107 --rc genhtml_legend=1 00:06:26.107 --rc geninfo_all_blocks=1 00:06:26.107 --rc geninfo_unexecuted_blocks=1 00:06:26.107 00:06:26.107 ' 00:06:26.107 19:59:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57027 00:06:26.107 19:59:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:26.107 19:59:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.107 19:59:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57027 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 57027 ']' 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.107 19:59:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.366 [2024-12-05 19:59:27.574643] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:26.366 [2024-12-05 19:59:27.574781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57027 ] 00:06:26.366 [2024-12-05 19:59:27.751402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.625 [2024-12-05 19:59:27.866697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:26.625 [2024-12-05 19:59:27.866759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57027' to capture a snapshot of events at runtime. 00:06:26.625 [2024-12-05 19:59:27.866768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.625 [2024-12-05 19:59:27.866778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.625 [2024-12-05 19:59:27.866784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57027 for offline analysis/debug. 
00:06:26.625 [2024-12-05 19:59:27.868073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.590 19:59:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.590 19:59:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.590 19:59:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.590 19:59:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.590 19:59:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:27.590 19:59:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:27.590 19:59:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.590 19:59:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.590 19:59:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 ************************************ 00:06:27.590 START TEST rpc_integrity 00:06:27.590 ************************************ 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.590 19:59:28 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.590 { 00:06:27.590 "name": "Malloc0", 00:06:27.590 "aliases": [ 00:06:27.590 "0e9d61d6-9b51-4459-945a-e1c45a6a9b84" 00:06:27.590 ], 00:06:27.590 "product_name": "Malloc disk", 00:06:27.590 "block_size": 512, 00:06:27.590 "num_blocks": 16384, 00:06:27.590 "uuid": "0e9d61d6-9b51-4459-945a-e1c45a6a9b84", 00:06:27.590 "assigned_rate_limits": { 00:06:27.590 "rw_ios_per_sec": 0, 00:06:27.590 "rw_mbytes_per_sec": 0, 00:06:27.590 "r_mbytes_per_sec": 0, 00:06:27.590 "w_mbytes_per_sec": 0 00:06:27.590 }, 00:06:27.590 "claimed": false, 00:06:27.590 "zoned": false, 00:06:27.590 "supported_io_types": { 00:06:27.590 "read": true, 00:06:27.590 "write": true, 00:06:27.590 "unmap": true, 00:06:27.590 "flush": true, 00:06:27.590 "reset": true, 00:06:27.590 "nvme_admin": false, 00:06:27.590 "nvme_io": false, 00:06:27.590 "nvme_io_md": false, 00:06:27.590 "write_zeroes": true, 00:06:27.590 "zcopy": true, 00:06:27.590 "get_zone_info": false, 00:06:27.590 "zone_management": false, 00:06:27.590 "zone_append": false, 00:06:27.590 "compare": false, 00:06:27.590 "compare_and_write": false, 00:06:27.590 "abort": true, 00:06:27.590 "seek_hole": false, 
00:06:27.590 "seek_data": false, 00:06:27.590 "copy": true, 00:06:27.590 "nvme_iov_md": false 00:06:27.590 }, 00:06:27.590 "memory_domains": [ 00:06:27.590 { 00:06:27.590 "dma_device_id": "system", 00:06:27.590 "dma_device_type": 1 00:06:27.590 }, 00:06:27.590 { 00:06:27.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.590 "dma_device_type": 2 00:06:27.590 } 00:06:27.590 ], 00:06:27.590 "driver_specific": {} 00:06:27.590 } 00:06:27.590 ]' 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.590 [2024-12-05 19:59:28.925291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:27.590 [2024-12-05 19:59:28.925361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.590 [2024-12-05 19:59:28.925404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:27.590 [2024-12-05 19:59:28.925433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.590 [2024-12-05 19:59:28.927910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.590 [2024-12-05 19:59:28.927953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.590 Passthru0 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:27.590 19:59:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.590 { 00:06:27.590 "name": "Malloc0", 00:06:27.590 "aliases": [ 00:06:27.590 "0e9d61d6-9b51-4459-945a-e1c45a6a9b84" 00:06:27.590 ], 00:06:27.590 "product_name": "Malloc disk", 00:06:27.590 "block_size": 512, 00:06:27.590 "num_blocks": 16384, 00:06:27.590 "uuid": "0e9d61d6-9b51-4459-945a-e1c45a6a9b84", 00:06:27.590 "assigned_rate_limits": { 00:06:27.590 "rw_ios_per_sec": 0, 00:06:27.590 "rw_mbytes_per_sec": 0, 00:06:27.590 "r_mbytes_per_sec": 0, 00:06:27.590 "w_mbytes_per_sec": 0 00:06:27.590 }, 00:06:27.590 "claimed": true, 00:06:27.590 "claim_type": "exclusive_write", 00:06:27.590 "zoned": false, 00:06:27.590 "supported_io_types": { 00:06:27.590 "read": true, 00:06:27.590 "write": true, 00:06:27.590 "unmap": true, 00:06:27.590 "flush": true, 00:06:27.590 "reset": true, 00:06:27.590 "nvme_admin": false, 00:06:27.590 "nvme_io": false, 00:06:27.590 "nvme_io_md": false, 00:06:27.590 "write_zeroes": true, 00:06:27.590 "zcopy": true, 00:06:27.590 "get_zone_info": false, 00:06:27.590 "zone_management": false, 00:06:27.590 "zone_append": false, 00:06:27.590 "compare": false, 00:06:27.590 "compare_and_write": false, 00:06:27.590 "abort": true, 00:06:27.590 "seek_hole": false, 00:06:27.590 "seek_data": false, 00:06:27.590 "copy": true, 00:06:27.590 "nvme_iov_md": false 00:06:27.590 }, 00:06:27.590 "memory_domains": [ 00:06:27.590 { 00:06:27.590 "dma_device_id": "system", 00:06:27.590 "dma_device_type": 1 00:06:27.590 }, 00:06:27.590 { 00:06:27.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.590 "dma_device_type": 2 00:06:27.590 } 00:06:27.590 ], 00:06:27.590 "driver_specific": {} 00:06:27.590 }, 00:06:27.590 { 00:06:27.590 "name": "Passthru0", 00:06:27.590 "aliases": [ 00:06:27.590 "230c22f8-cd93-5fd1-a80f-4c3280f6e6f7" 00:06:27.590 ], 00:06:27.590 "product_name": "passthru", 00:06:27.590 
"block_size": 512, 00:06:27.590 "num_blocks": 16384, 00:06:27.590 "uuid": "230c22f8-cd93-5fd1-a80f-4c3280f6e6f7", 00:06:27.590 "assigned_rate_limits": { 00:06:27.590 "rw_ios_per_sec": 0, 00:06:27.590 "rw_mbytes_per_sec": 0, 00:06:27.590 "r_mbytes_per_sec": 0, 00:06:27.590 "w_mbytes_per_sec": 0 00:06:27.590 }, 00:06:27.590 "claimed": false, 00:06:27.590 "zoned": false, 00:06:27.590 "supported_io_types": { 00:06:27.590 "read": true, 00:06:27.590 "write": true, 00:06:27.590 "unmap": true, 00:06:27.590 "flush": true, 00:06:27.590 "reset": true, 00:06:27.590 "nvme_admin": false, 00:06:27.590 "nvme_io": false, 00:06:27.590 "nvme_io_md": false, 00:06:27.590 "write_zeroes": true, 00:06:27.590 "zcopy": true, 00:06:27.590 "get_zone_info": false, 00:06:27.590 "zone_management": false, 00:06:27.590 "zone_append": false, 00:06:27.590 "compare": false, 00:06:27.590 "compare_and_write": false, 00:06:27.590 "abort": true, 00:06:27.590 "seek_hole": false, 00:06:27.590 "seek_data": false, 00:06:27.590 "copy": true, 00:06:27.590 "nvme_iov_md": false 00:06:27.590 }, 00:06:27.590 "memory_domains": [ 00:06:27.590 { 00:06:27.590 "dma_device_id": "system", 00:06:27.590 "dma_device_type": 1 00:06:27.590 }, 00:06:27.590 { 00:06:27.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.590 "dma_device_type": 2 00:06:27.590 } 00:06:27.590 ], 00:06:27.590 "driver_specific": { 00:06:27.590 "passthru": { 00:06:27.590 "name": "Passthru0", 00:06:27.590 "base_bdev_name": "Malloc0" 00:06:27.590 } 00:06:27.590 } 00:06:27.590 } 00:06:27.590 ]' 00:06:27.590 19:59:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.590 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.590 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.591 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.591 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.591 19:59:29 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.591 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:27.591 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.591 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.853 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.853 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.853 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.853 ************************************ 00:06:27.853 END TEST rpc_integrity 00:06:27.853 ************************************ 00:06:27.853 19:59:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.853 00:06:27.853 real 0m0.364s 00:06:27.853 user 0m0.193s 00:06:27.853 sys 0m0.062s 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.853 19:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 19:59:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:27.853 19:59:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.853 19:59:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.853 19:59:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 ************************************ 00:06:27.853 START TEST rpc_plugins 00:06:27.853 ************************************ 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:27.853 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.853 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:27.853 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.853 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.853 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:27.853 { 00:06:27.853 "name": "Malloc1", 00:06:27.853 "aliases": [ 00:06:27.853 "3190cbb1-c0c0-49af-9b7a-13f65407f8eb" 00:06:27.853 ], 00:06:27.853 "product_name": "Malloc disk", 00:06:27.853 "block_size": 4096, 00:06:27.853 "num_blocks": 256, 00:06:27.853 "uuid": "3190cbb1-c0c0-49af-9b7a-13f65407f8eb", 00:06:27.853 "assigned_rate_limits": { 00:06:27.853 "rw_ios_per_sec": 0, 00:06:27.853 "rw_mbytes_per_sec": 0, 00:06:27.853 "r_mbytes_per_sec": 0, 00:06:27.853 "w_mbytes_per_sec": 0 00:06:27.853 }, 00:06:27.853 "claimed": false, 00:06:27.853 "zoned": false, 00:06:27.853 "supported_io_types": { 00:06:27.853 "read": true, 00:06:27.853 "write": true, 00:06:27.853 "unmap": true, 00:06:27.853 "flush": true, 00:06:27.853 "reset": true, 00:06:27.853 "nvme_admin": false, 00:06:27.853 "nvme_io": false, 00:06:27.853 "nvme_io_md": false, 00:06:27.853 "write_zeroes": true, 00:06:27.853 "zcopy": true, 00:06:27.853 "get_zone_info": false, 00:06:27.853 "zone_management": false, 00:06:27.853 "zone_append": false, 00:06:27.853 "compare": false, 00:06:27.853 "compare_and_write": false, 00:06:27.853 "abort": true, 00:06:27.853 "seek_hole": false, 00:06:27.853 "seek_data": false, 00:06:27.853 "copy": 
true, 00:06:27.854 "nvme_iov_md": false 00:06:27.854 }, 00:06:27.854 "memory_domains": [ 00:06:27.854 { 00:06:27.854 "dma_device_id": "system", 00:06:27.854 "dma_device_type": 1 00:06:27.854 }, 00:06:27.854 { 00:06:27.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.854 "dma_device_type": 2 00:06:27.854 } 00:06:27.854 ], 00:06:27.854 "driver_specific": {} 00:06:27.854 } 00:06:27.854 ]' 00:06:27.854 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:27.854 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:27.854 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:27.854 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.854 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.113 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.113 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:28.113 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.113 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.113 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.113 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:28.113 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:28.113 ************************************ 00:06:28.113 END TEST rpc_plugins 00:06:28.113 ************************************ 00:06:28.113 19:59:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:28.114 00:06:28.114 real 0m0.175s 00:06:28.114 user 0m0.106s 00:06:28.114 sys 0m0.018s 00:06:28.114 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.114 19:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.114 19:59:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:28.114 19:59:29 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.114 19:59:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.114 19:59:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.114 ************************************ 00:06:28.114 START TEST rpc_trace_cmd_test 00:06:28.114 ************************************ 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:28.114 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57027", 00:06:28.114 "tpoint_group_mask": "0x8", 00:06:28.114 "iscsi_conn": { 00:06:28.114 "mask": "0x2", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "scsi": { 00:06:28.114 "mask": "0x4", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "bdev": { 00:06:28.114 "mask": "0x8", 00:06:28.114 "tpoint_mask": "0xffffffffffffffff" 00:06:28.114 }, 00:06:28.114 "nvmf_rdma": { 00:06:28.114 "mask": "0x10", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "nvmf_tcp": { 00:06:28.114 "mask": "0x20", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "ftl": { 00:06:28.114 "mask": "0x40", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "blobfs": { 00:06:28.114 "mask": "0x80", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "dsa": { 00:06:28.114 "mask": "0x200", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "thread": { 00:06:28.114 "mask": "0x400", 00:06:28.114 
"tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "nvme_pcie": { 00:06:28.114 "mask": "0x800", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "iaa": { 00:06:28.114 "mask": "0x1000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "nvme_tcp": { 00:06:28.114 "mask": "0x2000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "bdev_nvme": { 00:06:28.114 "mask": "0x4000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "sock": { 00:06:28.114 "mask": "0x8000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "blob": { 00:06:28.114 "mask": "0x10000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "bdev_raid": { 00:06:28.114 "mask": "0x20000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 }, 00:06:28.114 "scheduler": { 00:06:28.114 "mask": "0x40000", 00:06:28.114 "tpoint_mask": "0x0" 00:06:28.114 } 00:06:28.114 }' 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:28.114 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:28.374 ************************************ 00:06:28.374 END TEST rpc_trace_cmd_test 00:06:28.374 ************************************ 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:28.374 00:06:28.374 real 0m0.247s 00:06:28.374 user 
0m0.208s 00:06:28.374 sys 0m0.027s 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.374 19:59:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 19:59:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:28.374 19:59:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:28.374 19:59:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:28.374 19:59:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.374 19:59:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.374 19:59:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 ************************************ 00:06:28.374 START TEST rpc_daemon_integrity 00:06:28.374 ************************************ 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:28.634 { 00:06:28.634 "name": "Malloc2", 00:06:28.634 "aliases": [ 00:06:28.634 "870a2470-3c78-4c71-b707-e954fef4bd9e" 00:06:28.634 ], 00:06:28.634 "product_name": "Malloc disk", 00:06:28.634 "block_size": 512, 00:06:28.634 "num_blocks": 16384, 00:06:28.634 "uuid": "870a2470-3c78-4c71-b707-e954fef4bd9e", 00:06:28.634 "assigned_rate_limits": { 00:06:28.634 "rw_ios_per_sec": 0, 00:06:28.634 "rw_mbytes_per_sec": 0, 00:06:28.634 "r_mbytes_per_sec": 0, 00:06:28.634 "w_mbytes_per_sec": 0 00:06:28.634 }, 00:06:28.634 "claimed": false, 00:06:28.634 "zoned": false, 00:06:28.634 "supported_io_types": { 00:06:28.634 "read": true, 00:06:28.634 "write": true, 00:06:28.634 "unmap": true, 00:06:28.634 "flush": true, 00:06:28.634 "reset": true, 00:06:28.634 "nvme_admin": false, 00:06:28.634 "nvme_io": false, 00:06:28.634 "nvme_io_md": false, 00:06:28.634 "write_zeroes": true, 00:06:28.634 "zcopy": true, 00:06:28.634 "get_zone_info": false, 00:06:28.634 "zone_management": false, 00:06:28.634 "zone_append": false, 00:06:28.634 "compare": false, 00:06:28.634 "compare_and_write": false, 00:06:28.634 "abort": true, 00:06:28.634 "seek_hole": false, 00:06:28.634 "seek_data": false, 00:06:28.634 "copy": true, 00:06:28.634 "nvme_iov_md": false 00:06:28.634 }, 00:06:28.634 "memory_domains": [ 00:06:28.634 { 00:06:28.634 "dma_device_id": "system", 00:06:28.634 "dma_device_type": 1 00:06:28.634 }, 00:06:28.634 { 00:06:28.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.634 "dma_device_type": 2 00:06:28.634 } 
00:06:28.634 ], 00:06:28.634 "driver_specific": {} 00:06:28.634 } 00:06:28.634 ]' 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.634 [2024-12-05 19:59:29.875936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:28.634 [2024-12-05 19:59:29.875995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.634 [2024-12-05 19:59:29.876016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:28.634 [2024-12-05 19:59:29.876028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.634 [2024-12-05 19:59:29.878338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.634 [2024-12-05 19:59:29.878377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:28.634 Passthru0 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:28.634 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:28.635 { 00:06:28.635 "name": "Malloc2", 00:06:28.635 "aliases": [ 00:06:28.635 "870a2470-3c78-4c71-b707-e954fef4bd9e" 
00:06:28.635 ], 00:06:28.635 "product_name": "Malloc disk", 00:06:28.635 "block_size": 512, 00:06:28.635 "num_blocks": 16384, 00:06:28.635 "uuid": "870a2470-3c78-4c71-b707-e954fef4bd9e", 00:06:28.635 "assigned_rate_limits": { 00:06:28.635 "rw_ios_per_sec": 0, 00:06:28.635 "rw_mbytes_per_sec": 0, 00:06:28.635 "r_mbytes_per_sec": 0, 00:06:28.635 "w_mbytes_per_sec": 0 00:06:28.635 }, 00:06:28.635 "claimed": true, 00:06:28.635 "claim_type": "exclusive_write", 00:06:28.635 "zoned": false, 00:06:28.635 "supported_io_types": { 00:06:28.635 "read": true, 00:06:28.635 "write": true, 00:06:28.635 "unmap": true, 00:06:28.635 "flush": true, 00:06:28.635 "reset": true, 00:06:28.635 "nvme_admin": false, 00:06:28.635 "nvme_io": false, 00:06:28.635 "nvme_io_md": false, 00:06:28.635 "write_zeroes": true, 00:06:28.635 "zcopy": true, 00:06:28.635 "get_zone_info": false, 00:06:28.635 "zone_management": false, 00:06:28.635 "zone_append": false, 00:06:28.635 "compare": false, 00:06:28.635 "compare_and_write": false, 00:06:28.635 "abort": true, 00:06:28.635 "seek_hole": false, 00:06:28.635 "seek_data": false, 00:06:28.635 "copy": true, 00:06:28.635 "nvme_iov_md": false 00:06:28.635 }, 00:06:28.635 "memory_domains": [ 00:06:28.635 { 00:06:28.635 "dma_device_id": "system", 00:06:28.635 "dma_device_type": 1 00:06:28.635 }, 00:06:28.635 { 00:06:28.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.635 "dma_device_type": 2 00:06:28.635 } 00:06:28.635 ], 00:06:28.635 "driver_specific": {} 00:06:28.635 }, 00:06:28.635 { 00:06:28.635 "name": "Passthru0", 00:06:28.635 "aliases": [ 00:06:28.635 "d4109467-fbc4-5c29-94f5-88d8a60c21ff" 00:06:28.635 ], 00:06:28.635 "product_name": "passthru", 00:06:28.635 "block_size": 512, 00:06:28.635 "num_blocks": 16384, 00:06:28.635 "uuid": "d4109467-fbc4-5c29-94f5-88d8a60c21ff", 00:06:28.635 "assigned_rate_limits": { 00:06:28.635 "rw_ios_per_sec": 0, 00:06:28.635 "rw_mbytes_per_sec": 0, 00:06:28.635 "r_mbytes_per_sec": 0, 00:06:28.635 "w_mbytes_per_sec": 0 
00:06:28.635 }, 00:06:28.635 "claimed": false, 00:06:28.635 "zoned": false, 00:06:28.635 "supported_io_types": { 00:06:28.635 "read": true, 00:06:28.635 "write": true, 00:06:28.635 "unmap": true, 00:06:28.635 "flush": true, 00:06:28.635 "reset": true, 00:06:28.635 "nvme_admin": false, 00:06:28.635 "nvme_io": false, 00:06:28.635 "nvme_io_md": false, 00:06:28.635 "write_zeroes": true, 00:06:28.635 "zcopy": true, 00:06:28.635 "get_zone_info": false, 00:06:28.635 "zone_management": false, 00:06:28.635 "zone_append": false, 00:06:28.635 "compare": false, 00:06:28.635 "compare_and_write": false, 00:06:28.635 "abort": true, 00:06:28.635 "seek_hole": false, 00:06:28.635 "seek_data": false, 00:06:28.635 "copy": true, 00:06:28.635 "nvme_iov_md": false 00:06:28.635 }, 00:06:28.635 "memory_domains": [ 00:06:28.635 { 00:06:28.635 "dma_device_id": "system", 00:06:28.635 "dma_device_type": 1 00:06:28.635 }, 00:06:28.635 { 00:06:28.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.635 "dma_device_type": 2 00:06:28.635 } 00:06:28.635 ], 00:06:28.635 "driver_specific": { 00:06:28.635 "passthru": { 00:06:28.635 "name": "Passthru0", 00:06:28.635 "base_bdev_name": "Malloc2" 00:06:28.635 } 00:06:28.635 } 00:06:28.635 } 00:06:28.635 ]' 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:28.635 19:59:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:28.635 19:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:28.894 ************************************ 00:06:28.894 END TEST rpc_daemon_integrity 00:06:28.894 ************************************ 00:06:28.894 19:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:28.894 00:06:28.894 real 0m0.348s 00:06:28.894 user 0m0.199s 00:06:28.894 sys 0m0.048s 00:06:28.894 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.894 19:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.895 19:59:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:28.895 19:59:30 rpc -- rpc/rpc.sh@84 -- # killprocess 57027 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 57027 ']' 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@958 -- # kill -0 57027 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@959 -- # uname 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57027 00:06:28.895 killing process with pid 57027 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57027' 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@973 -- # kill 57027 00:06:28.895 19:59:30 rpc -- common/autotest_common.sh@978 -- # wait 57027 00:06:31.433 00:06:31.433 real 0m5.303s 00:06:31.433 user 0m5.899s 00:06:31.433 sys 0m0.889s 00:06:31.433 19:59:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.433 19:59:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.433 ************************************ 00:06:31.433 END TEST rpc 00:06:31.433 ************************************ 00:06:31.433 19:59:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:31.433 19:59:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.433 19:59:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.433 19:59:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.433 ************************************ 00:06:31.433 START TEST skip_rpc 00:06:31.433 ************************************ 00:06:31.433 19:59:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:31.433 * Looking for test storage... 
00:06:31.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:31.433 19:59:32 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.433 19:59:32 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.433 19:59:32 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.433 19:59:32 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.433 19:59:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.433 19:59:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.434 19:59:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.434 --rc genhtml_branch_coverage=1 00:06:31.434 --rc genhtml_function_coverage=1 00:06:31.434 --rc genhtml_legend=1 00:06:31.434 --rc geninfo_all_blocks=1 00:06:31.434 --rc geninfo_unexecuted_blocks=1 00:06:31.434 00:06:31.434 ' 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.434 --rc genhtml_branch_coverage=1 00:06:31.434 --rc genhtml_function_coverage=1 00:06:31.434 --rc genhtml_legend=1 00:06:31.434 --rc geninfo_all_blocks=1 00:06:31.434 --rc geninfo_unexecuted_blocks=1 00:06:31.434 00:06:31.434 ' 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.434 --rc genhtml_branch_coverage=1 00:06:31.434 --rc genhtml_function_coverage=1 00:06:31.434 --rc genhtml_legend=1 00:06:31.434 --rc geninfo_all_blocks=1 00:06:31.434 --rc geninfo_unexecuted_blocks=1 00:06:31.434 00:06:31.434 ' 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.434 --rc genhtml_branch_coverage=1 00:06:31.434 --rc genhtml_function_coverage=1 00:06:31.434 --rc genhtml_legend=1 00:06:31.434 --rc geninfo_all_blocks=1 00:06:31.434 --rc geninfo_unexecuted_blocks=1 00:06:31.434 00:06:31.434 ' 00:06:31.434 19:59:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.434 19:59:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:31.434 19:59:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.434 19:59:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.434 ************************************ 00:06:31.434 START TEST skip_rpc 00:06:31.434 ************************************ 00:06:31.434 19:59:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:31.434 19:59:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57262 00:06:31.434 19:59:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:31.434 19:59:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.434 19:59:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:31.693 [2024-12-05 19:59:32.959344] Starting SPDK v25.01-pre 
git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:31.694 [2024-12-05 19:59:32.959495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57262 ] 00:06:31.953 [2024-12-05 19:59:33.137733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.953 [2024-12-05 19:59:33.257473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57262 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57262 ']' 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57262 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57262 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.243 killing process with pid 57262 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57262' 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57262 00:06:37.243 19:59:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57262 00:06:39.171 00:06:39.171 real 0m7.443s 00:06:39.171 user 0m6.975s 00:06:39.171 sys 0m0.390s 00:06:39.171 19:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.171 19:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.171 ************************************ 00:06:39.171 END TEST skip_rpc 00:06:39.171 ************************************ 00:06:39.171 19:59:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:39.171 19:59:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.171 19:59:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.171 19:59:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.171 
************************************ 00:06:39.171 START TEST skip_rpc_with_json 00:06:39.171 ************************************ 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57366 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57366 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57366 ']' 00:06:39.171 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.172 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.172 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.172 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.172 19:59:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.172 [2024-12-05 19:59:40.458702] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:39.172 [2024-12-05 19:59:40.458861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:06:39.431 [2024-12-05 19:59:40.634346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.431 [2024-12-05 19:59:40.750540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.370 [2024-12-05 19:59:41.637607] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:40.370 request: 00:06:40.370 { 00:06:40.370 "trtype": "tcp", 00:06:40.370 "method": "nvmf_get_transports", 00:06:40.370 "req_id": 1 00:06:40.370 } 00:06:40.370 Got JSON-RPC error response 00:06:40.370 response: 00:06:40.370 { 00:06:40.370 "code": -19, 00:06:40.370 "message": "No such device" 00:06:40.370 } 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.370 [2024-12-05 19:59:41.649721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.370 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.630 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.630 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:40.630 { 00:06:40.630 "subsystems": [ 00:06:40.630 { 00:06:40.630 "subsystem": "fsdev", 00:06:40.630 "config": [ 00:06:40.630 { 00:06:40.630 "method": "fsdev_set_opts", 00:06:40.630 "params": { 00:06:40.630 "fsdev_io_pool_size": 65535, 00:06:40.630 "fsdev_io_cache_size": 256 00:06:40.630 } 00:06:40.630 } 00:06:40.630 ] 00:06:40.630 }, 00:06:40.630 { 00:06:40.631 "subsystem": "keyring", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "iobuf", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "iobuf_set_options", 00:06:40.631 "params": { 00:06:40.631 "small_pool_count": 8192, 00:06:40.631 "large_pool_count": 1024, 00:06:40.631 "small_bufsize": 8192, 00:06:40.631 "large_bufsize": 135168, 00:06:40.631 "enable_numa": false 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "sock", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "sock_set_default_impl", 00:06:40.631 "params": { 00:06:40.631 "impl_name": "posix" 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "sock_impl_set_options", 00:06:40.631 "params": { 00:06:40.631 "impl_name": "ssl", 00:06:40.631 "recv_buf_size": 4096, 00:06:40.631 "send_buf_size": 4096, 00:06:40.631 "enable_recv_pipe": true, 00:06:40.631 "enable_quickack": false, 00:06:40.631 
"enable_placement_id": 0, 00:06:40.631 "enable_zerocopy_send_server": true, 00:06:40.631 "enable_zerocopy_send_client": false, 00:06:40.631 "zerocopy_threshold": 0, 00:06:40.631 "tls_version": 0, 00:06:40.631 "enable_ktls": false 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "sock_impl_set_options", 00:06:40.631 "params": { 00:06:40.631 "impl_name": "posix", 00:06:40.631 "recv_buf_size": 2097152, 00:06:40.631 "send_buf_size": 2097152, 00:06:40.631 "enable_recv_pipe": true, 00:06:40.631 "enable_quickack": false, 00:06:40.631 "enable_placement_id": 0, 00:06:40.631 "enable_zerocopy_send_server": true, 00:06:40.631 "enable_zerocopy_send_client": false, 00:06:40.631 "zerocopy_threshold": 0, 00:06:40.631 "tls_version": 0, 00:06:40.631 "enable_ktls": false 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "vmd", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "accel", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "accel_set_options", 00:06:40.631 "params": { 00:06:40.631 "small_cache_size": 128, 00:06:40.631 "large_cache_size": 16, 00:06:40.631 "task_count": 2048, 00:06:40.631 "sequence_count": 2048, 00:06:40.631 "buf_count": 2048 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "bdev", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "bdev_set_options", 00:06:40.631 "params": { 00:06:40.631 "bdev_io_pool_size": 65535, 00:06:40.631 "bdev_io_cache_size": 256, 00:06:40.631 "bdev_auto_examine": true, 00:06:40.631 "iobuf_small_cache_size": 128, 00:06:40.631 "iobuf_large_cache_size": 16 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "bdev_raid_set_options", 00:06:40.631 "params": { 00:06:40.631 "process_window_size_kb": 1024, 00:06:40.631 "process_max_bandwidth_mb_sec": 0 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "bdev_iscsi_set_options", 
00:06:40.631 "params": { 00:06:40.631 "timeout_sec": 30 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "bdev_nvme_set_options", 00:06:40.631 "params": { 00:06:40.631 "action_on_timeout": "none", 00:06:40.631 "timeout_us": 0, 00:06:40.631 "timeout_admin_us": 0, 00:06:40.631 "keep_alive_timeout_ms": 10000, 00:06:40.631 "arbitration_burst": 0, 00:06:40.631 "low_priority_weight": 0, 00:06:40.631 "medium_priority_weight": 0, 00:06:40.631 "high_priority_weight": 0, 00:06:40.631 "nvme_adminq_poll_period_us": 10000, 00:06:40.631 "nvme_ioq_poll_period_us": 0, 00:06:40.631 "io_queue_requests": 0, 00:06:40.631 "delay_cmd_submit": true, 00:06:40.631 "transport_retry_count": 4, 00:06:40.631 "bdev_retry_count": 3, 00:06:40.631 "transport_ack_timeout": 0, 00:06:40.631 "ctrlr_loss_timeout_sec": 0, 00:06:40.631 "reconnect_delay_sec": 0, 00:06:40.631 "fast_io_fail_timeout_sec": 0, 00:06:40.631 "disable_auto_failback": false, 00:06:40.631 "generate_uuids": false, 00:06:40.631 "transport_tos": 0, 00:06:40.631 "nvme_error_stat": false, 00:06:40.631 "rdma_srq_size": 0, 00:06:40.631 "io_path_stat": false, 00:06:40.631 "allow_accel_sequence": false, 00:06:40.631 "rdma_max_cq_size": 0, 00:06:40.631 "rdma_cm_event_timeout_ms": 0, 00:06:40.631 "dhchap_digests": [ 00:06:40.631 "sha256", 00:06:40.631 "sha384", 00:06:40.631 "sha512" 00:06:40.631 ], 00:06:40.631 "dhchap_dhgroups": [ 00:06:40.631 "null", 00:06:40.631 "ffdhe2048", 00:06:40.631 "ffdhe3072", 00:06:40.631 "ffdhe4096", 00:06:40.631 "ffdhe6144", 00:06:40.631 "ffdhe8192" 00:06:40.631 ] 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "bdev_nvme_set_hotplug", 00:06:40.631 "params": { 00:06:40.631 "period_us": 100000, 00:06:40.631 "enable": false 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "bdev_wait_for_examine" 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "scsi", 00:06:40.631 "config": null 00:06:40.631 }, 00:06:40.631 { 
00:06:40.631 "subsystem": "scheduler", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "framework_set_scheduler", 00:06:40.631 "params": { 00:06:40.631 "name": "static" 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "vhost_scsi", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "vhost_blk", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "ublk", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "nbd", 00:06:40.631 "config": [] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "nvmf", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "nvmf_set_config", 00:06:40.631 "params": { 00:06:40.631 "discovery_filter": "match_any", 00:06:40.631 "admin_cmd_passthru": { 00:06:40.631 "identify_ctrlr": false 00:06:40.631 }, 00:06:40.631 "dhchap_digests": [ 00:06:40.631 "sha256", 00:06:40.631 "sha384", 00:06:40.631 "sha512" 00:06:40.631 ], 00:06:40.631 "dhchap_dhgroups": [ 00:06:40.631 "null", 00:06:40.631 "ffdhe2048", 00:06:40.631 "ffdhe3072", 00:06:40.631 "ffdhe4096", 00:06:40.631 "ffdhe6144", 00:06:40.631 "ffdhe8192" 00:06:40.631 ] 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "nvmf_set_max_subsystems", 00:06:40.631 "params": { 00:06:40.631 "max_subsystems": 1024 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "nvmf_set_crdt", 00:06:40.631 "params": { 00:06:40.631 "crdt1": 0, 00:06:40.631 "crdt2": 0, 00:06:40.631 "crdt3": 0 00:06:40.631 } 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "method": "nvmf_create_transport", 00:06:40.631 "params": { 00:06:40.631 "trtype": "TCP", 00:06:40.631 "max_queue_depth": 128, 00:06:40.631 "max_io_qpairs_per_ctrlr": 127, 00:06:40.631 "in_capsule_data_size": 4096, 00:06:40.631 "max_io_size": 131072, 00:06:40.631 "io_unit_size": 131072, 00:06:40.631 "max_aq_depth": 128, 00:06:40.631 "num_shared_buffers": 511, 
00:06:40.631 "buf_cache_size": 4294967295, 00:06:40.631 "dif_insert_or_strip": false, 00:06:40.631 "zcopy": false, 00:06:40.631 "c2h_success": true, 00:06:40.631 "sock_priority": 0, 00:06:40.631 "abort_timeout_sec": 1, 00:06:40.631 "ack_timeout": 0, 00:06:40.631 "data_wr_pool_size": 0 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 }, 00:06:40.631 { 00:06:40.631 "subsystem": "iscsi", 00:06:40.631 "config": [ 00:06:40.631 { 00:06:40.631 "method": "iscsi_set_options", 00:06:40.631 "params": { 00:06:40.631 "node_base": "iqn.2016-06.io.spdk", 00:06:40.631 "max_sessions": 128, 00:06:40.631 "max_connections_per_session": 2, 00:06:40.631 "max_queue_depth": 64, 00:06:40.631 "default_time2wait": 2, 00:06:40.631 "default_time2retain": 20, 00:06:40.631 "first_burst_length": 8192, 00:06:40.631 "immediate_data": true, 00:06:40.631 "allow_duplicated_isid": false, 00:06:40.631 "error_recovery_level": 0, 00:06:40.631 "nop_timeout": 60, 00:06:40.631 "nop_in_interval": 30, 00:06:40.631 "disable_chap": false, 00:06:40.631 "require_chap": false, 00:06:40.631 "mutual_chap": false, 00:06:40.631 "chap_group": 0, 00:06:40.631 "max_large_datain_per_connection": 64, 00:06:40.631 "max_r2t_per_connection": 4, 00:06:40.631 "pdu_pool_size": 36864, 00:06:40.631 "immediate_data_pool_size": 16384, 00:06:40.631 "data_out_pool_size": 2048 00:06:40.631 } 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 } 00:06:40.631 ] 00:06:40.631 } 00:06:40.631 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:40.631 19:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57366 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57366 ']' 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57366 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57366 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.632 killing process with pid 57366 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57366' 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57366 00:06:40.632 19:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57366 00:06:43.167 19:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57422 00:06:43.167 19:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.167 19:59:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57422 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57422 ']' 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57422 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57422 00:06:48.467 killing process with pid 57422 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57422' 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57422 00:06:48.467 19:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57422 00:06:50.375 19:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.375 19:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.375 00:06:50.375 real 0m11.362s 00:06:50.375 user 0m10.857s 00:06:50.375 sys 0m0.817s 00:06:50.375 19:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:50.376 ************************************ 00:06:50.376 END TEST skip_rpc_with_json 00:06:50.376 ************************************ 00:06:50.376 19:59:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:50.376 19:59:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.376 19:59:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.376 19:59:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.376 ************************************ 00:06:50.376 START TEST skip_rpc_with_delay 00:06:50.376 ************************************ 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:50.376 
19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:50.376 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.636 [2024-12-05 19:59:51.897876] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
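The error just logged is the point of this negative test: the harness expects spdk_tgt to refuse `--wait-for-rpc` when `--no-rpc-server` is set, and it wraps the invocation in the `NOT` helper from autotest_common.sh, which succeeds only when the wrapped command fails. A minimal sketch of that pattern (the real helper also validates the executable path and records the exit status in `es`, as the following trace lines show):

```shell
# Minimal sketch of a NOT-style negation helper: return success only
# when the wrapped command fails. The real NOT in autotest_common.sh
# additionally validates the executable and tracks the exit status.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    fi
    return 0       # wrapped command failed, as the negative test expects
}

NOT false && echo "negative test passed"
```

The `es=1` recorded in the trace corresponds to `NOT` observing the expected non-zero exit from spdk_tgt.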
00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.636 00:06:50.636 real 0m0.186s 00:06:50.636 user 0m0.111s 00:06:50.636 sys 0m0.073s 00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.636 19:59:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:50.636 ************************************ 00:06:50.636 END TEST skip_rpc_with_delay 00:06:50.636 ************************************ 00:06:50.636 19:59:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:50.636 19:59:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:50.636 19:59:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:50.636 19:59:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.636 19:59:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.636 19:59:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.636 ************************************ 00:06:50.636 START TEST exit_on_failed_rpc_init 00:06:50.636 ************************************ 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57550 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57550 00:06:50.636 19:59:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57550 ']' 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.636 19:59:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:50.896 [2024-12-05 19:59:52.127726] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:50.896 [2024-12-05 19:59:52.127848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57550 ] 00:06:50.896 [2024-12-05 19:59:52.302378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.155 [2024-12-05 19:59:52.424070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.096 19:59:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:52.096 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.096 [2024-12-05 19:59:53.417042] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
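The earlier "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from a waitforlisten-style poll loop. A minimal sketch of such a check, assuming bash and fractional `sleep`; the socket path, retry count, and interval here are illustrative stand-ins, not the real values used by waitforlisten in autotest_common.sh:

```shell
# Illustrative waitforlisten-style poll: block until a UNIX domain
# socket path appears, or give up after a retry budget. Values are
# stand-ins, not the real autotest_common.sh defaults.
waitforsock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                         # timed out
}

waitforsock "/var/tmp/does-not-exist-$$.sock" 3 || echo "timed out waiting for RPC socket"
```

The second spdk_tgt in this test then fails precisely because that socket is still bound by the first instance, which is the failure path the test exercises.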
00:06:52.096 [2024-12-05 19:59:53.417160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57574 ] 00:06:52.356 [2024-12-05 19:59:53.591216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.356 [2024-12-05 19:59:53.709667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.356 [2024-12-05 19:59:53.709770] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:52.356 [2024-12-05 19:59:53.709784] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:52.356 [2024-12-05 19:59:53.709798] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57550 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57550 ']' 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57550 00:06:52.616 19:59:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.616 19:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57550 00:06:52.616 19:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.616 19:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.616 19:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57550' 00:06:52.616 killing process with pid 57550 00:06:52.616 19:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57550 00:06:52.616 19:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57550 00:06:55.154 00:06:55.154 real 0m4.434s 00:06:55.154 user 0m4.805s 00:06:55.154 sys 0m0.573s 00:06:55.154 19:59:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.154 19:59:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.154 ************************************ 00:06:55.154 END TEST exit_on_failed_rpc_init 00:06:55.154 ************************************ 00:06:55.154 19:59:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:55.154 ************************************ 00:06:55.154 END TEST skip_rpc 00:06:55.154 ************************************ 00:06:55.154 00:06:55.154 real 0m23.901s 00:06:55.154 user 0m22.949s 00:06:55.154 sys 0m2.155s 00:06:55.154 19:59:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.154 19:59:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.154 19:59:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:55.154 19:59:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.154 19:59:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.154 19:59:56 -- common/autotest_common.sh@10 -- # set +x 00:06:55.154 ************************************ 00:06:55.154 START TEST rpc_client 00:06:55.154 ************************************ 00:06:55.154 19:59:56 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:55.413 * Looking for test storage... 00:06:55.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.414 19:59:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.414 --rc genhtml_branch_coverage=1 00:06:55.414 --rc genhtml_function_coverage=1 00:06:55.414 --rc genhtml_legend=1 00:06:55.414 --rc geninfo_all_blocks=1 00:06:55.414 --rc geninfo_unexecuted_blocks=1 00:06:55.414 00:06:55.414 ' 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.414 --rc genhtml_branch_coverage=1 00:06:55.414 --rc genhtml_function_coverage=1 00:06:55.414 --rc 
genhtml_legend=1 00:06:55.414 --rc geninfo_all_blocks=1 00:06:55.414 --rc geninfo_unexecuted_blocks=1 00:06:55.414 00:06:55.414 ' 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.414 --rc genhtml_branch_coverage=1 00:06:55.414 --rc genhtml_function_coverage=1 00:06:55.414 --rc genhtml_legend=1 00:06:55.414 --rc geninfo_all_blocks=1 00:06:55.414 --rc geninfo_unexecuted_blocks=1 00:06:55.414 00:06:55.414 ' 00:06:55.414 19:59:56 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.414 --rc genhtml_branch_coverage=1 00:06:55.414 --rc genhtml_function_coverage=1 00:06:55.414 --rc genhtml_legend=1 00:06:55.414 --rc geninfo_all_blocks=1 00:06:55.414 --rc geninfo_unexecuted_blocks=1 00:06:55.414 00:06:55.414 ' 00:06:55.414 19:59:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:55.414 OK 00:06:55.673 19:59:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:55.673 00:06:55.673 real 0m0.294s 00:06:55.673 user 0m0.160s 00:06:55.673 sys 0m0.150s 00:06:55.673 19:59:56 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.673 19:59:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:55.673 ************************************ 00:06:55.673 END TEST rpc_client 00:06:55.673 ************************************ 00:06:55.673 19:59:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:55.673 19:59:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.673 19:59:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.673 19:59:56 -- common/autotest_common.sh@10 -- # set +x 00:06:55.673 ************************************ 00:06:55.673 START TEST json_config 
00:06:55.673 ************************************ 00:06:55.673 19:59:56 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:55.673 19:59:57 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.673 19:59:57 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.673 19:59:57 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.673 19:59:57 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.673 19:59:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.673 19:59:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.673 19:59:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.673 19:59:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.673 19:59:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.673 19:59:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:55.673 19:59:57 json_config -- scripts/common.sh@345 -- # : 1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.673 19:59:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.673 19:59:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@353 -- # local d=1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.673 19:59:57 json_config -- scripts/common.sh@355 -- # echo 1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.673 19:59:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@353 -- # local d=2 00:06:55.673 19:59:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.673 19:59:57 json_config -- scripts/common.sh@355 -- # echo 2 00:06:55.934 19:59:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.934 19:59:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.934 19:59:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.934 19:59:57 json_config -- scripts/common.sh@368 -- # return 0 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.934 --rc genhtml_branch_coverage=1 00:06:55.934 --rc genhtml_function_coverage=1 00:06:55.934 --rc genhtml_legend=1 00:06:55.934 --rc geninfo_all_blocks=1 00:06:55.934 --rc geninfo_unexecuted_blocks=1 00:06:55.934 00:06:55.934 ' 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.934 --rc genhtml_branch_coverage=1 00:06:55.934 --rc genhtml_function_coverage=1 00:06:55.934 --rc genhtml_legend=1 00:06:55.934 --rc geninfo_all_blocks=1 00:06:55.934 --rc geninfo_unexecuted_blocks=1 00:06:55.934 00:06:55.934 ' 00:06:55.934 19:59:57 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.934 --rc genhtml_branch_coverage=1 00:06:55.934 --rc genhtml_function_coverage=1 00:06:55.934 --rc genhtml_legend=1 00:06:55.934 --rc geninfo_all_blocks=1 00:06:55.934 --rc geninfo_unexecuted_blocks=1 00:06:55.934 00:06:55.934 ' 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.934 --rc genhtml_branch_coverage=1 00:06:55.934 --rc genhtml_function_coverage=1 00:06:55.934 --rc genhtml_legend=1 00:06:55.934 --rc geninfo_all_blocks=1 00:06:55.934 --rc geninfo_unexecuted_blocks=1 00:06:55.934 00:06:55.934 ' 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.934 19:59:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.934 19:59:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.934 19:59:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.934 19:59:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.934 19:59:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.934 19:59:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.934 19:59:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.934 19:59:57 json_config -- paths/export.sh@5 -- # export PATH 00:06:55.934 19:59:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@51 -- # : 0 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.934 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.934 19:59:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:55.934 WARNING: No tests are enabled so not running JSON configuration tests 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:55.934 19:59:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:55.934 00:06:55.934 real 0m0.227s 00:06:55.934 user 0m0.139s 00:06:55.934 sys 0m0.097s 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.934 19:59:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.934 ************************************ 00:06:55.934 END TEST json_config 00:06:55.934 ************************************ 00:06:55.934 19:59:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:55.934 19:59:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.934 19:59:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.934 19:59:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.934 ************************************ 00:06:55.934 START TEST json_config_extra_key 00:06:55.934 ************************************ 00:06:55.934 19:59:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:55.934 19:59:57 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.934 19:59:57 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:06:55.934 19:59:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.194 19:59:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.194 --rc genhtml_branch_coverage=1 00:06:56.194 --rc genhtml_function_coverage=1 00:06:56.194 --rc genhtml_legend=1 00:06:56.194 --rc geninfo_all_blocks=1 00:06:56.194 --rc geninfo_unexecuted_blocks=1 00:06:56.194 00:06:56.194 ' 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.194 --rc genhtml_branch_coverage=1 00:06:56.194 --rc genhtml_function_coverage=1 00:06:56.194 --rc 
genhtml_legend=1 00:06:56.194 --rc geninfo_all_blocks=1 00:06:56.194 --rc geninfo_unexecuted_blocks=1 00:06:56.194 00:06:56.194 ' 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.194 --rc genhtml_branch_coverage=1 00:06:56.194 --rc genhtml_function_coverage=1 00:06:56.194 --rc genhtml_legend=1 00:06:56.194 --rc geninfo_all_blocks=1 00:06:56.194 --rc geninfo_unexecuted_blocks=1 00:06:56.194 00:06:56.194 ' 00:06:56.194 19:59:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.194 --rc genhtml_branch_coverage=1 00:06:56.194 --rc genhtml_function_coverage=1 00:06:56.194 --rc genhtml_legend=1 00:06:56.194 --rc geninfo_all_blocks=1 00:06:56.194 --rc geninfo_unexecuted_blocks=1 00:06:56.194 00:06:56.194 ' 00:06:56.194 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:06:56.194 19:59:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a1227725-a2f3-4c37-9707-dd6ea6fa1adb 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.195 19:59:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.195 19:59:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.195 19:59:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.195 19:59:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.195 19:59:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.195 19:59:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.195 19:59:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.195 19:59:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:56.195 19:59:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.195 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.195 19:59:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:56.195 INFO: launching applications... 00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
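The xtrace above walks through `scripts/common.sh`'s `lt 1.15 2` check field by field: both version strings are split on `IFS=.-:` into arrays, then compared as integers until one field differs. A minimal reconstruction of that comparison is sketched below; the function name `ver_lt` and the simplified body are assumptions (the real helper also sanitizes each field through a `decimal` function, which this sketch omits).

```shell
# Hedged sketch of the version comparison traced above (ver_lt is a
# hypothetical name; scripts/common.sh composes this from lt/cmp_versions).
ver_lt() {
    local -a ver1 ver2
    local IFS=.-: v              # split on dot, dash, and colon, as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk the longer of the two arrays; missing fields default to 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0   # first differing field decides
        (( d1 > d2 )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace returns 0 for `lt 1.15 2`: the very first fields (1 vs 2) already decide the comparison.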
00:06:56.195 19:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57784 00:06:56.195 Waiting for target to run... 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57784 /var/tmp/spdk_tgt.sock 00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57784 ']' 00:06:56.195 19:59:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.195 19:59:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:56.195 [2024-12-05 19:59:57.557627] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:56.195 [2024-12-05 19:59:57.557761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57784 ] 00:06:56.766 [2024-12-05 19:59:57.950590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.766 [2024-12-05 19:59:58.062085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.707 19:59:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.707 19:59:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:57.707 00:06:57.707 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:57.707 INFO: shutting down applications... 
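The `waitforlisten 57784 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched `spdk_tgt` is ready on its Unix domain socket. A rough sketch of that startup wait, under stated assumptions: the function name `wait_for_socket` is hypothetical, and this version only polls for the socket file to appear, whereas the real `autotest_common.sh` helper also issues an RPC over the socket to confirm the app responds.

```shell
# Hedged reconstruction of the waitforlisten pattern: poll, with a retry
# budget, until a Unix socket shows up at the expected path.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The retry cap matters: if the target crashes during startup, the caller fails fast instead of hanging the whole test run.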
00:06:57.707 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57784 ]] 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57784 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:57.707 19:59:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:57.967 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:57.967 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.967 19:59:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:57.967 19:59:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:58.538 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.538 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.538 19:59:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:58.538 19:59:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.107 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.107 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.107 20:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:59.107 20:00:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.677 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:59.677 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.677 20:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:59.677 20:00:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.936 20:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.937 20:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.937 20:00:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:06:59.937 20:00:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:07:00.506 SPDK target shutdown done 00:07:00.506 Success 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:00.506 20:00:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:00.506 20:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:00.506 00:07:00.506 real 0m4.648s 00:07:00.506 user 0m4.177s 00:07:00.506 sys 0m0.593s 00:07:00.506 20:00:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.506 20:00:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:00.506 ************************************ 00:07:00.506 END TEST json_config_extra_key 00:07:00.506 ************************************ 00:07:00.506 20:00:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:00.506 20:00:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.506 20:00:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.506 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:07:00.506 ************************************ 00:07:00.506 START TEST alias_rpc 00:07:00.506 ************************************ 00:07:00.506 20:00:01 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:00.767 * Looking for test storage... 00:07:00.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:00.767 20:00:02 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.767 20:00:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.767 --rc genhtml_branch_coverage=1 00:07:00.767 --rc genhtml_function_coverage=1 00:07:00.767 --rc genhtml_legend=1 00:07:00.767 --rc geninfo_all_blocks=1 00:07:00.767 --rc geninfo_unexecuted_blocks=1 00:07:00.767 00:07:00.767 ' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.767 --rc genhtml_branch_coverage=1 00:07:00.767 --rc genhtml_function_coverage=1 00:07:00.767 --rc 
genhtml_legend=1 00:07:00.767 --rc geninfo_all_blocks=1 00:07:00.767 --rc geninfo_unexecuted_blocks=1 00:07:00.767 00:07:00.767 ' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.767 --rc genhtml_branch_coverage=1 00:07:00.767 --rc genhtml_function_coverage=1 00:07:00.767 --rc genhtml_legend=1 00:07:00.767 --rc geninfo_all_blocks=1 00:07:00.767 --rc geninfo_unexecuted_blocks=1 00:07:00.767 00:07:00.767 ' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.767 --rc genhtml_branch_coverage=1 00:07:00.767 --rc genhtml_function_coverage=1 00:07:00.767 --rc genhtml_legend=1 00:07:00.767 --rc geninfo_all_blocks=1 00:07:00.767 --rc geninfo_unexecuted_blocks=1 00:07:00.767 00:07:00.767 ' 00:07:00.767 20:00:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.767 20:00:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57895 00:07:00.767 20:00:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.767 20:00:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57895 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57895 ']' 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
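The repeated `kill -0 57784` / `sleep 0.5` lines in the json_config_extra_key shutdown above come from `json_config/common.sh`'s shutdown loop: send SIGINT once, then probe the pid up to 30 times until it exits. A simplified reconstruction is sketched below; the name `shutdown_app` and the SIGKILL escalation at the end are assumptions layered on top of what the trace shows.

```shell
# Hedged sketch of the graceful-shutdown wait traced above. kill -0 sends
# no signal; it only tests whether the pid still exists.
shutdown_app() {
    local pid=$1 retries=${2:-30}
    kill -SIGINT "$pid" 2>/dev/null
    local i
    for (( i = 0; i < retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    echo "pid $pid did not exit; escalating" >&2
    kill -SIGKILL "$pid" 2>/dev/null             # assumption: force-kill fallback
}
```

In the log, the loop iterates several times (roughly 3.5 seconds of `sleep 0.5` rounds) before `kill -0` finally fails and the script prints "SPDK target shutdown done".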
00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.767 20:00:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.027 [2024-12-05 20:00:02.245662] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:01.027 [2024-12-05 20:00:02.245790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:07:01.027 [2024-12-05 20:00:02.420990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.286 [2024-12-05 20:00:02.537121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.248 20:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:02.248 20:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57895 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57895 ']' 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57895 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.248 20:00:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57895 00:07:02.507 20:00:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.507 20:00:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.507 killing process with pid 57895 00:07:02.507 20:00:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57895' 00:07:02.507 20:00:03 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57895 00:07:02.507 20:00:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 57895 00:07:05.044 00:07:05.044 real 0m4.138s 00:07:05.044 user 0m4.160s 00:07:05.044 sys 0m0.558s 00:07:05.044 20:00:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.044 20:00:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.044 ************************************ 00:07:05.044 END TEST alias_rpc 00:07:05.044 ************************************ 00:07:05.044 20:00:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:05.044 20:00:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:05.044 20:00:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.044 20:00:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.044 20:00:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.044 ************************************ 00:07:05.044 START TEST spdkcli_tcp 00:07:05.044 ************************************ 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:05.044 * Looking for test storage... 
00:07:05.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.044 20:00:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.044 --rc genhtml_branch_coverage=1 00:07:05.044 --rc genhtml_function_coverage=1 00:07:05.044 --rc genhtml_legend=1 00:07:05.044 --rc geninfo_all_blocks=1 00:07:05.044 --rc geninfo_unexecuted_blocks=1 00:07:05.044 00:07:05.044 ' 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.044 --rc genhtml_branch_coverage=1 00:07:05.044 --rc genhtml_function_coverage=1 00:07:05.044 --rc genhtml_legend=1 00:07:05.044 --rc geninfo_all_blocks=1 00:07:05.044 --rc geninfo_unexecuted_blocks=1 00:07:05.044 00:07:05.044 ' 00:07:05.044 20:00:06 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.044 --rc genhtml_branch_coverage=1 00:07:05.044 --rc genhtml_function_coverage=1 00:07:05.044 --rc genhtml_legend=1 00:07:05.044 --rc geninfo_all_blocks=1 00:07:05.044 --rc geninfo_unexecuted_blocks=1 00:07:05.044 00:07:05.044 ' 00:07:05.044 20:00:06 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.044 --rc genhtml_branch_coverage=1 00:07:05.044 --rc genhtml_function_coverage=1 00:07:05.044 --rc genhtml_legend=1 00:07:05.044 --rc geninfo_all_blocks=1 00:07:05.044 --rc geninfo_unexecuted_blocks=1 00:07:05.044 00:07:05.044 ' 00:07:05.044 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58002 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:05.045 20:00:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58002 00:07:05.045 20:00:06 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58002 ']' 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.045 20:00:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.045 [2024-12-05 20:00:06.474031] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:05.045 [2024-12-05 20:00:06.474167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58002 ] 00:07:05.304 [2024-12-05 20:00:06.653719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.564 [2024-12-05 20:00:06.770050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.564 [2024-12-05 20:00:06.770087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.502 20:00:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.502 20:00:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:06.502 20:00:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58020 00:07:06.502 20:00:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:06.502 20:00:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:06.502 [ 00:07:06.502 "bdev_malloc_delete", 
00:07:06.502 "bdev_malloc_create", 00:07:06.502 "bdev_null_resize", 00:07:06.502 "bdev_null_delete", 00:07:06.502 "bdev_null_create", 00:07:06.502 "bdev_nvme_cuse_unregister", 00:07:06.502 "bdev_nvme_cuse_register", 00:07:06.502 "bdev_opal_new_user", 00:07:06.502 "bdev_opal_set_lock_state", 00:07:06.502 "bdev_opal_delete", 00:07:06.502 "bdev_opal_get_info", 00:07:06.502 "bdev_opal_create", 00:07:06.502 "bdev_nvme_opal_revert", 00:07:06.502 "bdev_nvme_opal_init", 00:07:06.502 "bdev_nvme_send_cmd", 00:07:06.502 "bdev_nvme_set_keys", 00:07:06.502 "bdev_nvme_get_path_iostat", 00:07:06.502 "bdev_nvme_get_mdns_discovery_info", 00:07:06.502 "bdev_nvme_stop_mdns_discovery", 00:07:06.502 "bdev_nvme_start_mdns_discovery", 00:07:06.502 "bdev_nvme_set_multipath_policy", 00:07:06.502 "bdev_nvme_set_preferred_path", 00:07:06.502 "bdev_nvme_get_io_paths", 00:07:06.502 "bdev_nvme_remove_error_injection", 00:07:06.502 "bdev_nvme_add_error_injection", 00:07:06.502 "bdev_nvme_get_discovery_info", 00:07:06.502 "bdev_nvme_stop_discovery", 00:07:06.502 "bdev_nvme_start_discovery", 00:07:06.502 "bdev_nvme_get_controller_health_info", 00:07:06.502 "bdev_nvme_disable_controller", 00:07:06.502 "bdev_nvme_enable_controller", 00:07:06.502 "bdev_nvme_reset_controller", 00:07:06.502 "bdev_nvme_get_transport_statistics", 00:07:06.502 "bdev_nvme_apply_firmware", 00:07:06.502 "bdev_nvme_detach_controller", 00:07:06.502 "bdev_nvme_get_controllers", 00:07:06.502 "bdev_nvme_attach_controller", 00:07:06.502 "bdev_nvme_set_hotplug", 00:07:06.502 "bdev_nvme_set_options", 00:07:06.502 "bdev_passthru_delete", 00:07:06.502 "bdev_passthru_create", 00:07:06.502 "bdev_lvol_set_parent_bdev", 00:07:06.502 "bdev_lvol_set_parent", 00:07:06.502 "bdev_lvol_check_shallow_copy", 00:07:06.502 "bdev_lvol_start_shallow_copy", 00:07:06.502 "bdev_lvol_grow_lvstore", 00:07:06.502 "bdev_lvol_get_lvols", 00:07:06.502 "bdev_lvol_get_lvstores", 00:07:06.502 "bdev_lvol_delete", 00:07:06.502 "bdev_lvol_set_read_only", 
00:07:06.502 "bdev_lvol_resize", 00:07:06.502 "bdev_lvol_decouple_parent", 00:07:06.502 "bdev_lvol_inflate", 00:07:06.502 "bdev_lvol_rename", 00:07:06.502 "bdev_lvol_clone_bdev", 00:07:06.502 "bdev_lvol_clone", 00:07:06.502 "bdev_lvol_snapshot", 00:07:06.502 "bdev_lvol_create", 00:07:06.502 "bdev_lvol_delete_lvstore", 00:07:06.502 "bdev_lvol_rename_lvstore", 00:07:06.502 "bdev_lvol_create_lvstore", 00:07:06.502 "bdev_raid_set_options", 00:07:06.502 "bdev_raid_remove_base_bdev", 00:07:06.502 "bdev_raid_add_base_bdev", 00:07:06.502 "bdev_raid_delete", 00:07:06.502 "bdev_raid_create", 00:07:06.503 "bdev_raid_get_bdevs", 00:07:06.503 "bdev_error_inject_error", 00:07:06.503 "bdev_error_delete", 00:07:06.503 "bdev_error_create", 00:07:06.503 "bdev_split_delete", 00:07:06.503 "bdev_split_create", 00:07:06.503 "bdev_delay_delete", 00:07:06.503 "bdev_delay_create", 00:07:06.503 "bdev_delay_update_latency", 00:07:06.503 "bdev_zone_block_delete", 00:07:06.503 "bdev_zone_block_create", 00:07:06.503 "blobfs_create", 00:07:06.503 "blobfs_detect", 00:07:06.503 "blobfs_set_cache_size", 00:07:06.503 "bdev_aio_delete", 00:07:06.503 "bdev_aio_rescan", 00:07:06.503 "bdev_aio_create", 00:07:06.503 "bdev_ftl_set_property", 00:07:06.503 "bdev_ftl_get_properties", 00:07:06.503 "bdev_ftl_get_stats", 00:07:06.503 "bdev_ftl_unmap", 00:07:06.503 "bdev_ftl_unload", 00:07:06.503 "bdev_ftl_delete", 00:07:06.503 "bdev_ftl_load", 00:07:06.503 "bdev_ftl_create", 00:07:06.503 "bdev_virtio_attach_controller", 00:07:06.503 "bdev_virtio_scsi_get_devices", 00:07:06.503 "bdev_virtio_detach_controller", 00:07:06.503 "bdev_virtio_blk_set_hotplug", 00:07:06.503 "bdev_iscsi_delete", 00:07:06.503 "bdev_iscsi_create", 00:07:06.503 "bdev_iscsi_set_options", 00:07:06.503 "accel_error_inject_error", 00:07:06.503 "ioat_scan_accel_module", 00:07:06.503 "dsa_scan_accel_module", 00:07:06.503 "iaa_scan_accel_module", 00:07:06.503 "keyring_file_remove_key", 00:07:06.503 "keyring_file_add_key", 00:07:06.503 
"keyring_linux_set_options", 00:07:06.503 "fsdev_aio_delete", 00:07:06.503 "fsdev_aio_create", 00:07:06.503 "iscsi_get_histogram", 00:07:06.503 "iscsi_enable_histogram", 00:07:06.503 "iscsi_set_options", 00:07:06.503 "iscsi_get_auth_groups", 00:07:06.503 "iscsi_auth_group_remove_secret", 00:07:06.503 "iscsi_auth_group_add_secret", 00:07:06.503 "iscsi_delete_auth_group", 00:07:06.503 "iscsi_create_auth_group", 00:07:06.503 "iscsi_set_discovery_auth", 00:07:06.503 "iscsi_get_options", 00:07:06.503 "iscsi_target_node_request_logout", 00:07:06.503 "iscsi_target_node_set_redirect", 00:07:06.503 "iscsi_target_node_set_auth", 00:07:06.503 "iscsi_target_node_add_lun", 00:07:06.503 "iscsi_get_stats", 00:07:06.503 "iscsi_get_connections", 00:07:06.503 "iscsi_portal_group_set_auth", 00:07:06.503 "iscsi_start_portal_group", 00:07:06.503 "iscsi_delete_portal_group", 00:07:06.503 "iscsi_create_portal_group", 00:07:06.503 "iscsi_get_portal_groups", 00:07:06.503 "iscsi_delete_target_node", 00:07:06.503 "iscsi_target_node_remove_pg_ig_maps", 00:07:06.503 "iscsi_target_node_add_pg_ig_maps", 00:07:06.503 "iscsi_create_target_node", 00:07:06.503 "iscsi_get_target_nodes", 00:07:06.503 "iscsi_delete_initiator_group", 00:07:06.503 "iscsi_initiator_group_remove_initiators", 00:07:06.503 "iscsi_initiator_group_add_initiators", 00:07:06.503 "iscsi_create_initiator_group", 00:07:06.503 "iscsi_get_initiator_groups", 00:07:06.503 "nvmf_set_crdt", 00:07:06.503 "nvmf_set_config", 00:07:06.503 "nvmf_set_max_subsystems", 00:07:06.503 "nvmf_stop_mdns_prr", 00:07:06.503 "nvmf_publish_mdns_prr", 00:07:06.503 "nvmf_subsystem_get_listeners", 00:07:06.503 "nvmf_subsystem_get_qpairs", 00:07:06.503 "nvmf_subsystem_get_controllers", 00:07:06.503 "nvmf_get_stats", 00:07:06.503 "nvmf_get_transports", 00:07:06.503 "nvmf_create_transport", 00:07:06.503 "nvmf_get_targets", 00:07:06.503 "nvmf_delete_target", 00:07:06.503 "nvmf_create_target", 00:07:06.503 "nvmf_subsystem_allow_any_host", 00:07:06.503 
"nvmf_subsystem_set_keys", 00:07:06.503 "nvmf_subsystem_remove_host", 00:07:06.503 "nvmf_subsystem_add_host", 00:07:06.503 "nvmf_ns_remove_host", 00:07:06.503 "nvmf_ns_add_host", 00:07:06.503 "nvmf_subsystem_remove_ns", 00:07:06.503 "nvmf_subsystem_set_ns_ana_group", 00:07:06.503 "nvmf_subsystem_add_ns", 00:07:06.503 "nvmf_subsystem_listener_set_ana_state", 00:07:06.503 "nvmf_discovery_get_referrals", 00:07:06.503 "nvmf_discovery_remove_referral", 00:07:06.503 "nvmf_discovery_add_referral", 00:07:06.503 "nvmf_subsystem_remove_listener", 00:07:06.503 "nvmf_subsystem_add_listener", 00:07:06.503 "nvmf_delete_subsystem", 00:07:06.503 "nvmf_create_subsystem", 00:07:06.503 "nvmf_get_subsystems", 00:07:06.503 "env_dpdk_get_mem_stats", 00:07:06.503 "nbd_get_disks", 00:07:06.503 "nbd_stop_disk", 00:07:06.503 "nbd_start_disk", 00:07:06.503 "ublk_recover_disk", 00:07:06.503 "ublk_get_disks", 00:07:06.503 "ublk_stop_disk", 00:07:06.503 "ublk_start_disk", 00:07:06.503 "ublk_destroy_target", 00:07:06.503 "ublk_create_target", 00:07:06.503 "virtio_blk_create_transport", 00:07:06.503 "virtio_blk_get_transports", 00:07:06.503 "vhost_controller_set_coalescing", 00:07:06.503 "vhost_get_controllers", 00:07:06.503 "vhost_delete_controller", 00:07:06.503 "vhost_create_blk_controller", 00:07:06.503 "vhost_scsi_controller_remove_target", 00:07:06.503 "vhost_scsi_controller_add_target", 00:07:06.503 "vhost_start_scsi_controller", 00:07:06.503 "vhost_create_scsi_controller", 00:07:06.503 "thread_set_cpumask", 00:07:06.503 "scheduler_set_options", 00:07:06.503 "framework_get_governor", 00:07:06.503 "framework_get_scheduler", 00:07:06.503 "framework_set_scheduler", 00:07:06.503 "framework_get_reactors", 00:07:06.503 "thread_get_io_channels", 00:07:06.503 "thread_get_pollers", 00:07:06.503 "thread_get_stats", 00:07:06.503 "framework_monitor_context_switch", 00:07:06.503 "spdk_kill_instance", 00:07:06.503 "log_enable_timestamps", 00:07:06.503 "log_get_flags", 00:07:06.503 "log_clear_flag", 
00:07:06.503 "log_set_flag", 00:07:06.503 "log_get_level", 00:07:06.503 "log_set_level", 00:07:06.503 "log_get_print_level", 00:07:06.503 "log_set_print_level", 00:07:06.503 "framework_enable_cpumask_locks", 00:07:06.503 "framework_disable_cpumask_locks", 00:07:06.503 "framework_wait_init", 00:07:06.504 "framework_start_init", 00:07:06.504 "scsi_get_devices", 00:07:06.504 "bdev_get_histogram", 00:07:06.504 "bdev_enable_histogram", 00:07:06.504 "bdev_set_qos_limit", 00:07:06.504 "bdev_set_qd_sampling_period", 00:07:06.504 "bdev_get_bdevs", 00:07:06.504 "bdev_reset_iostat", 00:07:06.504 "bdev_get_iostat", 00:07:06.504 "bdev_examine", 00:07:06.504 "bdev_wait_for_examine", 00:07:06.504 "bdev_set_options", 00:07:06.504 "accel_get_stats", 00:07:06.504 "accel_set_options", 00:07:06.504 "accel_set_driver", 00:07:06.504 "accel_crypto_key_destroy", 00:07:06.504 "accel_crypto_keys_get", 00:07:06.504 "accel_crypto_key_create", 00:07:06.504 "accel_assign_opc", 00:07:06.504 "accel_get_module_info", 00:07:06.504 "accel_get_opc_assignments", 00:07:06.504 "vmd_rescan", 00:07:06.504 "vmd_remove_device", 00:07:06.504 "vmd_enable", 00:07:06.504 "sock_get_default_impl", 00:07:06.504 "sock_set_default_impl", 00:07:06.504 "sock_impl_set_options", 00:07:06.504 "sock_impl_get_options", 00:07:06.504 "iobuf_get_stats", 00:07:06.504 "iobuf_set_options", 00:07:06.504 "keyring_get_keys", 00:07:06.504 "framework_get_pci_devices", 00:07:06.504 "framework_get_config", 00:07:06.504 "framework_get_subsystems", 00:07:06.504 "fsdev_set_opts", 00:07:06.504 "fsdev_get_opts", 00:07:06.504 "trace_get_info", 00:07:06.504 "trace_get_tpoint_group_mask", 00:07:06.504 "trace_disable_tpoint_group", 00:07:06.504 "trace_enable_tpoint_group", 00:07:06.504 "trace_clear_tpoint_mask", 00:07:06.504 "trace_set_tpoint_mask", 00:07:06.504 "notify_get_notifications", 00:07:06.504 "notify_get_types", 00:07:06.504 "spdk_get_version", 00:07:06.504 "rpc_get_methods" 00:07:06.504 ] 00:07:06.504 20:00:07 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:06.504 20:00:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.504 20:00:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.504 20:00:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:06.504 20:00:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58002 00:07:06.504 20:00:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58002 ']' 00:07:06.504 20:00:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58002 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58002 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58002' 00:07:06.768 killing process with pid 58002 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58002 00:07:06.768 20:00:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58002 00:07:09.313 00:07:09.313 real 0m4.283s 00:07:09.313 user 0m7.684s 00:07:09.313 sys 0m0.653s 00:07:09.313 ************************************ 00:07:09.314 END TEST spdkcli_tcp 00:07:09.314 ************************************ 00:07:09.314 20:00:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.314 20:00:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.314 20:00:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.314 20:00:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.314 20:00:10 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.314 20:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:09.314 ************************************ 00:07:09.314 START TEST dpdk_mem_utility 00:07:09.314 ************************************ 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.314 * Looking for test storage... 00:07:09.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:09.314 
20:00:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.314 20:00:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.314 --rc genhtml_branch_coverage=1 00:07:09.314 --rc genhtml_function_coverage=1 00:07:09.314 --rc genhtml_legend=1 00:07:09.314 --rc geninfo_all_blocks=1 00:07:09.314 --rc geninfo_unexecuted_blocks=1 00:07:09.314 00:07:09.314 ' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.314 --rc 
genhtml_branch_coverage=1 00:07:09.314 --rc genhtml_function_coverage=1 00:07:09.314 --rc genhtml_legend=1 00:07:09.314 --rc geninfo_all_blocks=1 00:07:09.314 --rc geninfo_unexecuted_blocks=1 00:07:09.314 00:07:09.314 ' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.314 --rc genhtml_branch_coverage=1 00:07:09.314 --rc genhtml_function_coverage=1 00:07:09.314 --rc genhtml_legend=1 00:07:09.314 --rc geninfo_all_blocks=1 00:07:09.314 --rc geninfo_unexecuted_blocks=1 00:07:09.314 00:07:09.314 ' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.314 --rc genhtml_branch_coverage=1 00:07:09.314 --rc genhtml_function_coverage=1 00:07:09.314 --rc genhtml_legend=1 00:07:09.314 --rc geninfo_all_blocks=1 00:07:09.314 --rc geninfo_unexecuted_blocks=1 00:07:09.314 00:07:09.314 ' 00:07:09.314 20:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:09.314 20:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58124 00:07:09.314 20:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.314 20:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58124 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:09.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.314 20:00:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.574 [2024-12-05 20:00:10.811235] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:09.574 [2024-12-05 20:00:10.811454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58124 ] 00:07:09.574 [2024-12-05 20:00:10.988669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.834 [2024-12-05 20:00:11.102751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.773 20:00:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.773 20:00:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:10.773 20:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:10.773 20:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:10.773 20:00:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.773 20:00:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:10.773 { 00:07:10.773 "filename": "/tmp/spdk_mem_dump.txt" 00:07:10.773 } 00:07:10.773 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.773 20:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:10.773 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:10.773 1 heaps totaling size 824.000000 MiB 00:07:10.773 size: 
824.000000 MiB heap id: 0 00:07:10.773 end heaps---------- 00:07:10.773 9 mempools totaling size 603.782043 MiB 00:07:10.773 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:10.773 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:10.773 size: 100.555481 MiB name: bdev_io_58124 00:07:10.773 size: 50.003479 MiB name: msgpool_58124 00:07:10.773 size: 36.509338 MiB name: fsdev_io_58124 00:07:10.773 size: 21.763794 MiB name: PDU_Pool 00:07:10.773 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:10.773 size: 4.133484 MiB name: evtpool_58124 00:07:10.773 size: 0.026123 MiB name: Session_Pool 00:07:10.773 end mempools------- 00:07:10.773 6 memzones totaling size 4.142822 MiB 00:07:10.773 size: 1.000366 MiB name: RG_ring_0_58124 00:07:10.773 size: 1.000366 MiB name: RG_ring_1_58124 00:07:10.773 size: 1.000366 MiB name: RG_ring_4_58124 00:07:10.773 size: 1.000366 MiB name: RG_ring_5_58124 00:07:10.773 size: 0.125366 MiB name: RG_ring_2_58124 00:07:10.773 size: 0.015991 MiB name: RG_ring_3_58124 00:07:10.773 end memzones------- 00:07:10.773 20:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:10.773 heap id: 0 total size: 824.000000 MiB number of busy elements: 309 number of free elements: 18 00:07:10.773 list of free elements. 
size: 16.782837 MiB 00:07:10.773 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:10.773 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:10.773 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:10.773 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:10.773 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:10.773 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:10.773 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:10.773 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:10.773 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:10.773 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:10.773 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:10.773 element at address: 0x20001b400000 with size: 0.563660 MiB 00:07:10.773 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:10.773 element at address: 0x200019600000 with size: 0.488708 MiB 00:07:10.773 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:10.773 element at address: 0x200012c00000 with size: 0.433228 MiB 00:07:10.773 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:10.773 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:10.773 list of standard malloc elements. 
size: 199.286255 MiB 00:07:10.773 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:10.773 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:10.773 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:10.773 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:10.773 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:10.773 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:10.773 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:10.773 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:10.773 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:10.773 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:10.773 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:10.773 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:10.773 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:10.773 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:10.773 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:07:10.774 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:10.774 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:10.774 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:10.774 
element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4917c0 with size: 0.000244 
MiB 00:07:10.774 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4933c0 
with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:10.774 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:10.775 element at 
address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:10.775 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:10.775 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c280 with size: 0.000244 MiB 
00:07:10.775 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886da80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886de80 with 
size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:10.775 element at address: 
0x20002886fa80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:10.775 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:10.775 list of memzone associated elements. size: 607.930908 MiB 00:07:10.775 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:10.775 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:10.775 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:10.775 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:10.775 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:10.775 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58124_0 00:07:10.775 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:10.775 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58124_0 00:07:10.775 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:10.775 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58124_0 00:07:10.775 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:10.775 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:10.775 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:10.775 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:10.775 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:10.775 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58124_0 00:07:10.775 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:10.775 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58124 00:07:10.775 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:10.775 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58124 00:07:10.775 element at 
address: 0x2000196fde00 with size: 1.008179 MiB 00:07:10.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:10.775 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:10.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:10.775 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:10.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:10.775 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:10.775 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:10.775 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:10.775 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58124 00:07:10.775 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:10.775 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58124 00:07:10.775 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:10.775 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58124 00:07:10.775 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:10.775 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58124 00:07:10.775 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:10.775 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58124 00:07:10.775 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:10.775 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58124 00:07:10.776 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:10.776 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:10.776 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:10.776 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:10.776 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:10.776 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:10.776 
element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:10.776 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58124 00:07:10.776 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:10.776 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58124 00:07:10.776 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:10.776 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:10.776 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:10.776 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:10.776 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:10.776 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58124 00:07:10.776 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:10.776 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:10.776 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:10.776 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58124 00:07:10.776 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:10.776 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58124 00:07:10.776 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:10.776 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58124 00:07:10.776 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:10.776 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:10.776 20:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:10.776 20:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58124 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58124 ']' 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58124 00:07:10.776 20:00:12 dpdk_mem_utility -- 
common/autotest_common.sh@959 -- # uname 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58124 00:07:10.776 killing process with pid 58124 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58124' 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58124 00:07:10.776 20:00:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58124 00:07:13.316 ************************************ 00:07:13.316 END TEST dpdk_mem_utility 00:07:13.316 ************************************ 00:07:13.316 00:07:13.316 real 0m4.070s 00:07:13.316 user 0m3.979s 00:07:13.316 sys 0m0.577s 00:07:13.316 20:00:14 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.317 20:00:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.317 20:00:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:13.317 20:00:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.317 20:00:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.317 20:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:13.317 ************************************ 00:07:13.317 START TEST event 00:07:13.317 ************************************ 00:07:13.317 20:00:14 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:13.317 * Looking for test storage... 
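An editorial aside on the memory dump that ends above: each record has the fixed shape `element at address: 0x… with size: N MiB`, and the per-element sizes should sum to the totals the dump reports (e.g. "list of standard malloc elements. size: 199.286255 MiB"). A minimal, hypothetical parser for sanity-checking such a log fragment (this helper is not part of SPDK or DPDK; the record format is taken verbatim from the dump above):

```python
import re

# Matches the record format printed in the DPDK memory dump above.
ELEMENT_RE = re.compile(
    r"element at address: (0x[0-9a-f]+) with size: ([0-9.]+) MiB"
)

def total_mib(log_text: str) -> float:
    """Sum the sizes (in MiB) of all elements found in a log fragment."""
    return sum(float(m.group(2)) for m in ELEMENT_RE.finditer(log_text))

# Two sample records copied from the dump above: each is 0.000244 MiB.
sample = (
    "element at address: 0x2000004fe040 with size: 0.000244 MiB "
    "element at address: 0x2000004fe140 with size: 0.000244 MiB"
)
print(round(total_mib(sample), 6))  # -> 0.000488
```

Running the same function over the full dump (minus the summary lines) gives a quick cross-check against the reported list totals.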
00:07:13.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:13.317 20:00:14 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.317 20:00:14 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.317 20:00:14 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.576 20:00:14 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.577 20:00:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.577 20:00:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.577 20:00:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.577 20:00:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.577 20:00:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.577 20:00:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.577 20:00:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.577 20:00:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.577 20:00:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.577 20:00:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.577 20:00:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.577 20:00:14 event -- scripts/common.sh@344 -- # case "$op" in 00:07:13.577 20:00:14 event -- scripts/common.sh@345 -- # : 1 00:07:13.577 20:00:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.577 20:00:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.577 20:00:14 event -- scripts/common.sh@365 -- # decimal 1 00:07:13.577 20:00:14 event -- scripts/common.sh@353 -- # local d=1 00:07:13.577 20:00:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.577 20:00:14 event -- scripts/common.sh@355 -- # echo 1 00:07:13.577 20:00:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.577 20:00:14 event -- scripts/common.sh@366 -- # decimal 2 00:07:13.577 20:00:14 event -- scripts/common.sh@353 -- # local d=2 00:07:13.577 20:00:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.577 20:00:14 event -- scripts/common.sh@355 -- # echo 2 00:07:13.577 20:00:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.577 20:00:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.577 20:00:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.577 20:00:14 event -- scripts/common.sh@368 -- # return 0 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.577 --rc genhtml_branch_coverage=1 00:07:13.577 --rc genhtml_function_coverage=1 00:07:13.577 --rc genhtml_legend=1 00:07:13.577 --rc geninfo_all_blocks=1 00:07:13.577 --rc geninfo_unexecuted_blocks=1 00:07:13.577 00:07:13.577 ' 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.577 --rc genhtml_branch_coverage=1 00:07:13.577 --rc genhtml_function_coverage=1 00:07:13.577 --rc genhtml_legend=1 00:07:13.577 --rc geninfo_all_blocks=1 00:07:13.577 --rc geninfo_unexecuted_blocks=1 00:07:13.577 00:07:13.577 ' 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.577 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:13.577 --rc genhtml_branch_coverage=1 00:07:13.577 --rc genhtml_function_coverage=1 00:07:13.577 --rc genhtml_legend=1 00:07:13.577 --rc geninfo_all_blocks=1 00:07:13.577 --rc geninfo_unexecuted_blocks=1 00:07:13.577 00:07:13.577 ' 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.577 --rc genhtml_branch_coverage=1 00:07:13.577 --rc genhtml_function_coverage=1 00:07:13.577 --rc genhtml_legend=1 00:07:13.577 --rc geninfo_all_blocks=1 00:07:13.577 --rc geninfo_unexecuted_blocks=1 00:07:13.577 00:07:13.577 ' 00:07:13.577 20:00:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:13.577 20:00:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:13.577 20:00:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:13.577 20:00:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.577 20:00:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.577 ************************************ 00:07:13.577 START TEST event_perf 00:07:13.577 ************************************ 00:07:13.577 20:00:14 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:13.577 Running I/O for 1 seconds...[2024-12-05 20:00:14.899955] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:07:13.577 [2024-12-05 20:00:14.900100] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:07:13.835 [2024-12-05 20:00:15.074182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.835 [2024-12-05 20:00:15.205854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.835 [2024-12-05 20:00:15.205921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.835 [2024-12-05 20:00:15.205912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.835 Running I/O for 1 seconds...[2024-12-05 20:00:15.205871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.206 00:07:15.206 lcore 0: 189258 00:07:15.206 lcore 1: 189257 00:07:15.206 lcore 2: 189258 00:07:15.206 lcore 3: 189259 00:07:15.206 done. 
00:07:15.206 00:07:15.206 real 0m1.643s 00:07:15.206 user 0m4.389s 00:07:15.206 sys 0m0.127s 00:07:15.206 20:00:16 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.206 20:00:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.206 ************************************ 00:07:15.206 END TEST event_perf 00:07:15.206 ************************************ 00:07:15.206 20:00:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.206 20:00:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:15.206 20:00:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.206 20:00:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.206 ************************************ 00:07:15.206 START TEST event_reactor 00:07:15.206 ************************************ 00:07:15.206 20:00:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.207 [2024-12-05 20:00:16.611997] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
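The `cmp_versions` xtrace near the top of this section (scripts/common.sh@333-368) compares two dotted version strings field by field — here `1.15` against `2`, returning 0 because the first fields already differ. A standalone sketch of that idea (the function name is hypothetical, not SPDK's actual helper):

```shell
# Compare two dotted versions field by field; echo "lt", "gt", or "eq".
# Missing fields are treated as 0, as in the traced loop above.
ver_cmp() {
    local IFS=.
    local -a a=($1) b=($2)
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && { echo gt; return 0; }
        (( x < y )) && { echo lt; return 0; }
    done
    echo eq
}
```

With this sketch, `ver_cmp 1.15 2` yields `lt`, matching the trace's `return 0` for the `lt 1.15 2` check that gates the LCOV option setup.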
00:07:15.207 [2024-12-05 20:00:16.612214] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:07:15.465 [2024-12-05 20:00:16.794416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.724 [2024-12-05 20:00:16.930909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.100 test_start 00:07:17.100 oneshot 00:07:17.100 tick 100 00:07:17.100 tick 100 00:07:17.100 tick 250 00:07:17.100 tick 100 00:07:17.100 tick 100 00:07:17.100 tick 100 00:07:17.100 tick 250 00:07:17.100 tick 500 00:07:17.100 tick 100 00:07:17.100 tick 100 00:07:17.100 tick 250 00:07:17.100 tick 100 00:07:17.100 tick 100 00:07:17.100 test_end 00:07:17.100 00:07:17.100 real 0m1.636s 00:07:17.100 user 0m1.416s 00:07:17.100 sys 0m0.109s 00:07:17.100 20:00:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.100 20:00:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:17.100 ************************************ 00:07:17.100 END TEST event_reactor 00:07:17.100 ************************************ 00:07:17.100 20:00:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.100 20:00:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:17.100 20:00:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.100 20:00:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.100 ************************************ 00:07:17.100 START TEST event_reactor_perf 00:07:17.100 ************************************ 00:07:17.100 20:00:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.100 [2024-12-05 
20:00:18.319962] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:17.100 [2024-12-05 20:00:18.320192] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:07:17.100 [2024-12-05 20:00:18.502400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.359 [2024-12-05 20:00:18.661572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.779 test_start 00:07:18.779 test_end 00:07:18.779 Performance: 320537 events per second 00:07:18.779 00:07:18.779 real 0m1.667s 00:07:18.779 user 0m1.445s 00:07:18.779 sys 0m0.110s 00:07:18.779 ************************************ 00:07:18.779 END TEST event_reactor_perf 00:07:18.779 ************************************ 00:07:18.779 20:00:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.779 20:00:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.779 20:00:19 event -- event/event.sh@49 -- # uname -s 00:07:18.779 20:00:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:18.779 20:00:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.779 20:00:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.779 20:00:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.779 20:00:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.779 ************************************ 00:07:18.779 START TEST event_scheduler 00:07:18.779 ************************************ 00:07:18.779 20:00:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.779 * Looking for test storage... 
00:07:18.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:18.779 20:00:20 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.779 20:00:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.779 20:00:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.039 20:00:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.039 --rc genhtml_branch_coverage=1 00:07:19.039 --rc genhtml_function_coverage=1 00:07:19.039 --rc genhtml_legend=1 00:07:19.039 --rc geninfo_all_blocks=1 00:07:19.039 --rc geninfo_unexecuted_blocks=1 00:07:19.039 00:07:19.039 ' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.039 --rc genhtml_branch_coverage=1 00:07:19.039 --rc genhtml_function_coverage=1 00:07:19.039 --rc 
genhtml_legend=1 00:07:19.039 --rc geninfo_all_blocks=1 00:07:19.039 --rc geninfo_unexecuted_blocks=1 00:07:19.039 00:07:19.039 ' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.039 --rc genhtml_branch_coverage=1 00:07:19.039 --rc genhtml_function_coverage=1 00:07:19.039 --rc genhtml_legend=1 00:07:19.039 --rc geninfo_all_blocks=1 00:07:19.039 --rc geninfo_unexecuted_blocks=1 00:07:19.039 00:07:19.039 ' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.039 --rc genhtml_branch_coverage=1 00:07:19.039 --rc genhtml_function_coverage=1 00:07:19.039 --rc genhtml_legend=1 00:07:19.039 --rc geninfo_all_blocks=1 00:07:19.039 --rc geninfo_unexecuted_blocks=1 00:07:19.039 00:07:19.039 ' 00:07:19.039 20:00:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:19.039 20:00:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58390 00:07:19.039 20:00:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:19.039 20:00:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.039 20:00:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58390 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58390 ']' 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:19.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.039 20:00:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.039 [2024-12-05 20:00:20.344180] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:19.039 [2024-12-05 20:00:20.344480] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58390 ] 00:07:19.298 [2024-12-05 20:00:20.530347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.298 [2024-12-05 20:00:20.695558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.298 [2024-12-05 20:00:20.695711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.298 [2024-12-05 20:00:20.695793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.298 [2024-12-05 20:00:20.695752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:19.866 20:00:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.866 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.866 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.866 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.866 POWER: Cannot set governor of lcore 0 to performance 00:07:19.866 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.866 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.866 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.866 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.866 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:19.866 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:19.866 POWER: Unable to set Power Management Environment for lcore 0 00:07:19.866 [2024-12-05 20:00:21.233414] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:19.866 [2024-12-05 20:00:21.233446] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:19.866 [2024-12-05 20:00:21.233459] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:19.866 [2024-12-05 20:00:21.233484] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:19.866 [2024-12-05 20:00:21.233495] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:19.866 [2024-12-05 20:00:21.233506] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.866 20:00:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.866 20:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 [2024-12-05 20:00:21.643433] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
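The `scheduler_create_thread` test that follows pins one active and one idle thread per core by passing one-hot cpumasks (`-m 0x1`, `0x2`, `0x4`, `0x8`) to successive `scheduler_thread_create` RPCs. A minimal sketch of generating those masks for a 4-core run (variable names are illustrative, not from the test script):

```shell
# Build one-hot cpumasks 0x1..0x8 for a 4-core pinned-thread test.
ncores=4
masks=()
for ((i = 0; i < ncores; i++)); do
    masks+=( "$(printf '0x%x' $((1 << i)))" )
done
printf '%s\n' "${masks[@]}"
```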
00:07:20.435 20:00:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 20:00:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:20.435 20:00:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.435 20:00:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 ************************************ 00:07:20.435 START TEST scheduler_create_thread 00:07:20.435 ************************************ 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 2 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 3 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 4 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 5 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 6 00:07:20.435 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.436 7 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.436 8 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.436 9 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.436 10 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.436 20:00:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.814 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.814 20:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:21.814 20:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:21.814 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.814 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.752 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.752 20:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:22.752 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.752 20:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.687 20:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.687 20:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:23.687 20:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:23.687 20:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.687 20:00:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.252 ************************************ 00:07:24.252 END TEST scheduler_create_thread 00:07:24.252 ************************************ 00:07:24.252 20:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.252 00:07:24.252 real 0m3.889s 00:07:24.252 user 0m0.028s 00:07:24.252 sys 0m0.008s 00:07:24.252 20:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.252 20:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.252 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:24.252 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58390 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58390 ']' 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58390 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58390 00:07:24.252 killing process with pid 58390 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58390' 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58390 00:07:24.252 20:00:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58390 00:07:24.511 [2024-12-05 20:00:25.924954] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:25.887 00:07:25.887 real 0m7.202s 00:07:25.887 user 0m15.385s 00:07:25.887 sys 0m0.588s 00:07:25.887 ************************************ 00:07:25.887 END TEST event_scheduler 00:07:25.887 ************************************ 00:07:25.887 20:00:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.887 20:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.887 20:00:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:25.887 20:00:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:25.887 20:00:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.887 20:00:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.887 20:00:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.887 ************************************ 00:07:25.887 START TEST app_repeat 00:07:25.887 ************************************ 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:25.887 Process app_repeat pid: 58518 00:07:25.887 spdk_app_start Round 0 00:07:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
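The `app_repeat` prologue below blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock..." before issuing any RPCs. A hedged sketch of that wait loop (the function name is hypothetical; SPDK's own `waitforlisten` helper additionally checks the process is alive):

```shell
# Poll until a UNIX-domain socket exists, or give up after N retries.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Only after the socket appears do the `bdev_malloc_create` and `nbd_start_disk` RPC calls below make sense, since `rpc.py -s /var/tmp/spdk-nbd.sock` would otherwise fail to connect.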
00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58518 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58518' 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:25.887 20:00:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58518 /var/tmp/spdk-nbd.sock 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58518 ']' 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.887 20:00:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.887 [2024-12-05 20:00:27.320066] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:07:25.887 [2024-12-05 20:00:27.320213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58518 ] 00:07:26.146 [2024-12-05 20:00:27.495881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.405 [2024-12-05 20:00:27.636074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.405 [2024-12-05 20:00:27.636088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.975 20:00:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.975 20:00:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:26.975 20:00:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.234 Malloc0 00:07:27.235 20:00:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.494 Malloc1 00:07:27.494 20:00:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.494 20:00:28 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.494 20:00:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:27.754 /dev/nbd0 00:07:27.754 20:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.754 20:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.754 1+0 records in 00:07:27.754 1+0 
records out 00:07:27.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290293 s, 14.1 MB/s 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.754 20:00:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:27.754 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.754 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.754 20:00:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:28.014 /dev/nbd1 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.014 1+0 records in 00:07:28.014 1+0 records out 00:07:28.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393782 s, 10.4 MB/s 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.014 20:00:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.014 20:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.274 20:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.274 { 00:07:28.274 "nbd_device": "/dev/nbd0", 00:07:28.274 "bdev_name": "Malloc0" 00:07:28.274 }, 00:07:28.274 { 00:07:28.274 "nbd_device": "/dev/nbd1", 00:07:28.274 "bdev_name": "Malloc1" 00:07:28.274 } 00:07:28.274 ]' 00:07:28.274 20:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.274 { 00:07:28.274 "nbd_device": "/dev/nbd0", 00:07:28.274 "bdev_name": "Malloc0" 00:07:28.274 }, 00:07:28.274 { 00:07:28.274 "nbd_device": "/dev/nbd1", 00:07:28.274 "bdev_name": "Malloc1" 00:07:28.274 } 00:07:28.274 ]' 00:07:28.274 20:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
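The `waitfornbd` helper traced above polls `/proc/partitions` until the new device name appears, then smoke-reads one 4 KiB block with `dd`. A minimal standalone sketch of that wait loop follows; the function name and the optional file argument are illustrative (added so the sketch can be exercised against a plain file), while the real helper lives in `common/autotest_common.sh`:

```shell
# Poll until a device name appears in a partitions listing, as waitfornbd does.
# The second argument defaults to /proc/partitions; it is parameterized here
# only so the sketch can be tested against an ordinary file.
waitfordev() {
    local name=$1 file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$file" && return 0
        sleep 0.1
    done
    return 1
}
```

On success the caller goes on to `dd` one block off the device; on timeout (20 attempts) the helper returns nonzero and the test fails.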
00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:28.534 /dev/nbd1' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:28.534 /dev/nbd1' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:28.534 256+0 records in 00:07:28.534 256+0 records out 00:07:28.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132325 s, 79.2 MB/s 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:28.534 256+0 records in 00:07:28.534 256+0 records out 00:07:28.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253404 s, 41.4 MB/s 00:07:28.534 20:00:29 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:28.534 256+0 records in 00:07:28.534 256+0 records out 00:07:28.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265935 s, 39.4 MB/s 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.534 20:00:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.794 20:00:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.084 20:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:29.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:29.344 20:00:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:29.603 20:00:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:30.977 [2024-12-05 20:00:32.215880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.977 [2024-12-05 20:00:32.359048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.977 [2024-12-05 20:00:32.359060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.234 
[2024-12-05 20:00:32.563226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:31.234 [2024-12-05 20:00:32.563332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:32.611 spdk_app_start Round 1 00:07:32.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:32.611 20:00:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:32.611 20:00:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:32.611 20:00:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58518 /var/tmp/spdk-nbd.sock 00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58518 ']' 00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
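Each `spdk_app_start Round N` banner above comes from the `for i in {0..2}` loop in `event/event.sh`: every pass restarts the target, reruns the nbd data verification, kills the instance over the RPC socket, and sleeps before the next round. A stripped-down sketch of that control flow, with the RPC steps reduced to comments (the loop body shown is a simplification, not the exact script):

```shell
# Repeat the start/verify/kill cycle three times, as event.sh's app_repeat does.
for i in 0 1 2; do
    echo "spdk_app_start Round $i"
    # waitforlisten <pid> /var/tmp/spdk-nbd.sock
    # ... bdev_malloc_create, nbd_start_disk, dd write/verify, nbd_stop_disk ...
    # rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    # sleep 3   # give the old instance time to shut down before restarting
done
```

The three-round repetition is the point of the test: it checks that the app can be started, exercised, and torn down repeatedly on the same socket.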
00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.611 20:00:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.870 20:00:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.870 20:00:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:32.870 20:00:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.127 Malloc0 00:07:33.127 20:00:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.693 Malloc1 00:07:33.693 20:00:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:33.693 20:00:34 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.693 20:00:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:33.693 /dev/nbd0 00:07:33.693 20:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.693 20:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.693 20:00:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:33.693 20:00:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:33.693 20:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.693 20:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.693 20:00:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.950 1+0 records in 00:07:33.950 1+0 records out 00:07:33.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293548 s, 14.0 MB/s 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.950 
20:00:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.950 20:00:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:33.950 20:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.950 20:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.950 20:00:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:33.950 /dev/nbd1 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.208 1+0 records in 00:07:34.208 1+0 records out 00:07:34.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298038 s, 13.7 MB/s 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:34.208 20:00:35 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.208 20:00:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.208 20:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.467 { 00:07:34.467 "nbd_device": "/dev/nbd0", 00:07:34.467 "bdev_name": "Malloc0" 00:07:34.467 }, 00:07:34.467 { 00:07:34.467 "nbd_device": "/dev/nbd1", 00:07:34.467 "bdev_name": "Malloc1" 00:07:34.467 } 00:07:34.467 ]' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.467 { 00:07:34.467 "nbd_device": "/dev/nbd0", 00:07:34.467 "bdev_name": "Malloc0" 00:07:34.467 }, 00:07:34.467 { 00:07:34.467 "nbd_device": "/dev/nbd1", 00:07:34.467 "bdev_name": "Malloc1" 00:07:34.467 } 00:07:34.467 ]' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.467 /dev/nbd1' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.467 /dev/nbd1' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.467 
20:00:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.467 256+0 records in 00:07:34.467 256+0 records out 00:07:34.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119861 s, 87.5 MB/s 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.467 256+0 records in 00:07:34.467 256+0 records out 00:07:34.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020497 s, 51.2 MB/s 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.467 256+0 records in 00:07:34.467 256+0 records out 00:07:34.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269362 s, 38.9 MB/s 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
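The data path traced here is `nbd_dd_data_verify`: fill a temp file with 1 MiB from `/dev/urandom`, `dd` it onto each nbd device with `oflag=direct`, then `cmp` the first 1 MiB back against the source. A self-contained sketch of the same round trip; plain files stand in for `/dev/nbd*`, so `oflag=direct` is dropped (it is generally not usable on ordinary tmpfs files):

```shell
# Write a random pattern to a target, then verify it byte-for-byte,
# mirroring nbd_dd_data_verify's write and verify phases.
roundtrip() {
    local src=$1 dst=$2
    dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null  # build pattern
    dd if="$src" of="$dst" bs=4096 count=256 2>/dev/null        # "write" phase
    cmp -b -n 1M "$src" "$dst"                                  # "verify" phase
}
```

With a real block device as `$dst`, a `cmp` mismatch here is what turns a silent data-corruption bug into a test failure.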
00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.467 20:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.468 20:00:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:34.727 20:00:36 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.727 20:00:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.985 20:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.985 20:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:34.985 20:00:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.985 20:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.985 20:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.986 20:00:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.245 20:00:36 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:35.245 20:00:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:35.245 20:00:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:35.813 20:00:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:37.197 [2024-12-05 20:00:38.350656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.197 [2024-12-05 20:00:38.463494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.197 [2024-12-05 20:00:38.463517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.456 [2024-12-05 20:00:38.668010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:37.456 [2024-12-05 20:00:38.668136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:38.834 spdk_app_start Round 2 00:07:38.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
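`nbd_get_count` above extracts device names from the `nbd_get_disks` JSON with `jq` and counts `/dev/nbd` lines with `grep -c`; the `# true` step in the trace is the `|| true` that absorbs grep's exit status 1 when the list is empty (so `count=0` does not abort the script). A sketch of that pipeline, wrapped in a hypothetical `count_nbd` helper and using the JSON shapes seen in the trace (requires `jq`):

```shell
count_nbd() {
    # $1: JSON array as returned by the nbd_get_disks RPC
    local names
    names=$(echo "$1" | jq -r '.[] | .nbd_device')
    # grep -c still prints 0 on no match but exits 1, hence || true
    echo "$names" | grep -c /dev/nbd || true
}
```

After stopping both disks the RPC returns `[]`, the pipeline yields `0`, and the `'[' 0 -ne 0 ']'` check passes, confirming every nbd device was torn down.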
00:07:38.834 20:00:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:38.834 20:00:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:38.834 20:00:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58518 /var/tmp/spdk-nbd.sock 00:07:38.834 20:00:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58518 ']' 00:07:38.834 20:00:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:38.834 20:00:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.835 20:00:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:38.835 20:00:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.835 20:00:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.094 20:00:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.094 20:00:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:39.094 20:00:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.353 Malloc0 00:07:39.353 20:00:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.613 Malloc1 00:07:39.613 20:00:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.613 20:00:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:39.873 /dev/nbd0 00:07:39.873 20:00:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.873 20:00:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.873 1+0 records in 00:07:39.873 1+0 records out 00:07:39.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401685 s, 10.2 MB/s 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.873 20:00:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:39.873 20:00:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.873 20:00:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.873 20:00:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:40.134 /dev/nbd1 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.134 20:00:41 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.134 1+0 records in 00:07:40.134 1+0 records out 00:07:40.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365824 s, 11.2 MB/s 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.134 20:00:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.134 20:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.394 20:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.394 { 00:07:40.394 "nbd_device": "/dev/nbd0", 00:07:40.394 "bdev_name": "Malloc0" 00:07:40.394 }, 00:07:40.394 { 00:07:40.394 "nbd_device": "/dev/nbd1", 00:07:40.394 "bdev_name": "Malloc1" 00:07:40.394 } 00:07:40.394 ]' 00:07:40.394 20:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:40.394 20:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.394 { 00:07:40.394 "nbd_device": "/dev/nbd0", 00:07:40.394 "bdev_name": "Malloc0" 00:07:40.394 }, 00:07:40.394 { 00:07:40.394 "nbd_device": "/dev/nbd1", 00:07:40.394 "bdev_name": "Malloc1" 00:07:40.394 } 00:07:40.394 ]' 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.653 /dev/nbd1' 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.653 /dev/nbd1' 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:40.653 20:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:40.654 256+0 records in 00:07:40.654 256+0 records out 00:07:40.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549586 s, 191 MB/s 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.654 20:00:41 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.654 256+0 records in 00:07:40.654 256+0 records out 00:07:40.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194335 s, 54.0 MB/s 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.654 256+0 records in 00:07:40.654 256+0 records out 00:07:40.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286422 s, 36.6 MB/s 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.654 20:00:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.913 20:00:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.173 20:00:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.434 20:00:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.434 20:00:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:42.003 20:00:43 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:07:42.941 [2024-12-05 20:00:44.354751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.201 [2024-12-05 20:00:44.477205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.201 [2024-12-05 20:00:44.477205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.461 [2024-12-05 20:00:44.697207] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:43.461 [2024-12-05 20:00:44.697267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:44.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:44.834 20:00:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58518 /var/tmp/spdk-nbd.sock 00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58518 ']' 00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.834 20:00:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:45.091 20:00:46 event.app_repeat -- event/event.sh@39 -- # killprocess 58518 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58518 ']' 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58518 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58518 00:07:45.091 killing process with pid 58518 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58518' 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58518 00:07:45.091 20:00:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58518 00:07:46.526 spdk_app_start is called in Round 0. 00:07:46.526 Shutdown signal received, stop current app iteration 00:07:46.526 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:07:46.526 spdk_app_start is called in Round 1. 00:07:46.526 Shutdown signal received, stop current app iteration 00:07:46.526 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:07:46.526 spdk_app_start is called in Round 2. 
00:07:46.526 Shutdown signal received, stop current app iteration 00:07:46.526 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:07:46.526 spdk_app_start is called in Round 3. 00:07:46.526 Shutdown signal received, stop current app iteration 00:07:46.526 ************************************ 00:07:46.526 END TEST app_repeat 00:07:46.526 ************************************ 00:07:46.526 20:00:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:46.526 20:00:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:46.526 00:07:46.526 real 0m20.319s 00:07:46.526 user 0m43.843s 00:07:46.526 sys 0m3.061s 00:07:46.526 20:00:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.526 20:00:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.526 20:00:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:46.526 20:00:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:46.526 20:00:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.526 20:00:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.526 20:00:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.526 ************************************ 00:07:46.526 START TEST cpu_locks 00:07:46.526 ************************************ 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:46.526 * Looking for test storage... 
00:07:46.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.526 20:00:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.526 --rc genhtml_branch_coverage=1 00:07:46.526 --rc genhtml_function_coverage=1 00:07:46.526 --rc genhtml_legend=1 00:07:46.526 --rc geninfo_all_blocks=1 00:07:46.526 --rc geninfo_unexecuted_blocks=1 00:07:46.526 00:07:46.526 ' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.526 --rc genhtml_branch_coverage=1 00:07:46.526 --rc genhtml_function_coverage=1 00:07:46.526 --rc genhtml_legend=1 00:07:46.526 --rc geninfo_all_blocks=1 00:07:46.526 --rc geninfo_unexecuted_blocks=1 
00:07:46.526 00:07:46.526 ' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.526 --rc genhtml_branch_coverage=1 00:07:46.526 --rc genhtml_function_coverage=1 00:07:46.526 --rc genhtml_legend=1 00:07:46.526 --rc geninfo_all_blocks=1 00:07:46.526 --rc geninfo_unexecuted_blocks=1 00:07:46.526 00:07:46.526 ' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.526 --rc genhtml_branch_coverage=1 00:07:46.526 --rc genhtml_function_coverage=1 00:07:46.526 --rc genhtml_legend=1 00:07:46.526 --rc geninfo_all_blocks=1 00:07:46.526 --rc geninfo_unexecuted_blocks=1 00:07:46.526 00:07:46.526 ' 00:07:46.526 20:00:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:46.526 20:00:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:46.526 20:00:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:46.526 20:00:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.526 20:00:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.526 ************************************ 00:07:46.526 START TEST default_locks 00:07:46.526 ************************************ 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58974 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.526 
20:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58974 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.526 20:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.526 [2024-12-05 20:00:47.865714] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
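Earlier in the trace (the cpu_locks header), scripts/common.sh's `lt 1.15 2` check splits both version strings into fields and compares them numerically field by field, treating missing fields as zero. The same idea can be re-implemented in a few lines of bash; this is a sketch only (`version_lt` is an illustrative name, and it splits on dots alone, whereas the original also splits on `-` and `:`):

```shell
# Component-wise "less than" for dotted version strings, mirroring the
# cmp_versions walk visible in the trace. Missing components count as 0.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v a b
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

The numeric comparison is the important detail: `1.2 < 1.10` holds here, which a plain string comparison would get wrong.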
00:07:46.526 [2024-12-05 20:00:47.865960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:07:46.784 [2024-12-05 20:00:48.034922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.784 [2024-12-05 20:00:48.179546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.719 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.719 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:47.719 20:00:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58974 00:07:47.719 20:00:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58974 00:07:47.719 20:00:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.286 20:00:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58974 00:07:48.286 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58974 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:07:48.287 killing process with pid 58974 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58974 00:07:48.287 20:00:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58974 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58974 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58974 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:50.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.815 ERROR: process (pid: 58974) is no longer running 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58974 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
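The teardown above follows autotest's `killprocess` shape: confirm the pid is still alive with `kill -0`, read the process name with `ps` and refuse to signal a `sudo` process, then send SIGTERM and reap the child. A standalone sketch of that pattern (the function name is illustrative; the real helper also logs its actions and handles more platforms):

```shell
# Sketch of the killprocess pattern from the trace above.
killprocess_sketch() {
    local pid=$1 process_name

    # kill -0 delivers no signal; it only tests that the pid exists
    # and that we are allowed to signal it.
    kill -0 "$pid" 2>/dev/null || return 1

    # Never terminate a sudo wrapper by mistake.
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1

    kill "$pid"
    # Reap the child so the pid is fully gone before returning;
    # wait's non-zero status (killed by signal) is expected here.
    wait "$pid" 2>/dev/null || true
    return 0
}
```

Calling `wait` after the kill is what makes the subsequent `kill -0` re-check in later subtests reliable: once the child is reaped, the pid no longer exists at all.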
00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58974) - No such process 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:50.815 00:07:50.815 real 0m4.321s 00:07:50.815 user 0m4.308s 00:07:50.815 sys 0m0.706s 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.815 20:00:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.815 ************************************ 00:07:50.815 END TEST default_locks 00:07:50.815 ************************************ 00:07:50.815 20:00:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:50.815 20:00:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.815 20:00:52 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.815 20:00:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.815 ************************************ 00:07:50.815 START TEST default_locks_via_rpc 00:07:50.815 ************************************ 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59049 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59049 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59049 ']' 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.815 20:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.815 [2024-12-05 20:00:52.248605] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:07:50.815 [2024-12-05 20:00:52.248822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:07:51.074 [2024-12-05 20:00:52.424470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.332 [2024-12-05 20:00:52.538386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 20:00:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59049 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59049 00:07:52.265 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59049 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59049 ']' 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59049 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59049 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.524 killing process with pid 59049 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59049' 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59049 00:07:52.524 20:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59049 00:07:55.056 00:07:55.056 real 0m4.227s 00:07:55.056 user 0m4.172s 00:07:55.056 sys 0m0.652s 00:07:55.056 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.056 
************************************ 00:07:55.056 END TEST default_locks_via_rpc 00:07:55.056 ************************************ 00:07:55.056 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.056 20:00:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:55.056 20:00:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.056 20:00:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.056 20:00:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.056 ************************************ 00:07:55.056 START TEST non_locking_app_on_locked_coremask 00:07:55.056 ************************************ 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59123 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59123 /var/tmp/spdk.sock 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59123 ']' 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.056 20:00:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.314 [2024-12-05 20:00:56.549076] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:55.314 [2024-12-05 20:00:56.549312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:07:55.314 [2024-12-05 20:00:56.728557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.572 [2024-12-05 20:00:56.862842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59144 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59144 /var/tmp/spdk2.sock 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.507 20:00:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.507 20:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.765 [2024-12-05 20:00:57.996104] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:56.765 [2024-12-05 20:00:57.996331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59144 ] 00:07:56.765 [2024-12-05 20:00:58.165302] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:56.765 [2024-12-05 20:00:58.165354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.024 [2024-12-05 20:00:58.393440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.555 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.555 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:59.555 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59123 00:07:59.555 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59123 00:07:59.555 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.813 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59123 00:07:59.813 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59123 ']' 00:07:59.813 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59123 00:07:59.813 20:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59123 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.813 killing process with pid 59123 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59123' 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59123 00:07:59.813 20:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59123 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59144 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59144 ']' 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59144 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59144 00:08:05.104 killing process with pid 59144 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59144' 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59144 00:08:05.104 20:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59144 00:08:07.011 00:08:07.011 real 0m11.861s 00:08:07.011 user 0m12.170s 00:08:07.011 sys 0m1.214s 00:08:07.011 20:01:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:07.011 20:01:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.011 ************************************ 00:08:07.011 END TEST non_locking_app_on_locked_coremask 00:08:07.011 ************************************ 00:08:07.011 20:01:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:07.011 20:01:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.011 20:01:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.011 20:01:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.011 ************************************ 00:08:07.011 START TEST locking_app_on_unlocked_coremask 00:08:07.011 ************************************ 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59301 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59301 /var/tmp/spdk.sock 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59301 ']' 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.011 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.012 20:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.278 [2024-12-05 20:01:08.451688] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:07.278 [2024-12-05 20:01:08.451811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 00:08:07.278 [2024-12-05 20:01:08.628501] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:07.278 [2024-12-05 20:01:08.628642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.536 [2024-12-05 20:01:08.746341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59318 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59318 ']' 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.472 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.472 [2024-12-05 20:01:09.747503] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:08.472 [2024-12-05 20:01:09.747763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:08:08.730 [2024-12-05 20:01:09.929388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.730 [2024-12-05 20:01:10.155752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.260 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.260 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.260 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59318 00:08:11.260 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59318 00:08:11.260 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59301 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59301 ']' 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59301 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301 00:08:11.519 killing process with pid 59301 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59301' 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59301 00:08:11.519 20:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59301 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59318 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59318 ']' 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59318 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59318 00:08:16.785 killing process with pid 59318 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59318' 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59318 00:08:16.785 20:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59318 00:08:18.685 ************************************ 00:08:18.685 END TEST locking_app_on_unlocked_coremask 00:08:18.685 ************************************ 00:08:18.685 00:08:18.685 real 0m11.760s 00:08:18.685 user 0m12.028s 00:08:18.685 sys 0m1.260s 00:08:18.685 20:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.685 20:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.953 20:01:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:18.953 20:01:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.953 20:01:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.953 20:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.953 ************************************ 00:08:18.953 START TEST locking_app_on_locked_coremask 00:08:18.953 ************************************ 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59466 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59466 /var/tmp/spdk.sock 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59466 ']' 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.953 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.953 [2024-12-05 20:01:20.279536] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:18.954 [2024-12-05 20:01:20.279775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ] 00:08:19.211 [2024-12-05 20:01:20.454413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.211 [2024-12-05 20:01:20.572663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59482 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59482 /var/tmp/spdk2.sock 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59482 /var/tmp/spdk2.sock 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59482 /var/tmp/spdk2.sock 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59482 ']' 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:20.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.153 20:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.153 [2024-12-05 20:01:21.577945] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:20.153 [2024-12-05 20:01:21.578204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59482 ] 00:08:20.412 [2024-12-05 20:01:21.754022] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59466 has claimed it. 00:08:20.412 [2024-12-05 20:01:21.754079] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:20.981 ERROR: process (pid: 59482) is no longer running 00:08:20.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59482) - No such process 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59466 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.981 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59466 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59466 00:08:21.240 20:01:22 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59466 ']' 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59466 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59466 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59466' 00:08:21.240 killing process with pid 59466 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59466 00:08:21.240 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59466 00:08:23.815 ************************************ 00:08:23.815 END TEST locking_app_on_locked_coremask 00:08:23.815 ************************************ 00:08:23.815 00:08:23.815 real 0m4.950s 00:08:23.815 user 0m5.127s 00:08:23.815 sys 0m0.835s 00:08:23.815 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.815 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.815 20:01:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:23.815 20:01:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:08:23.815 20:01:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.815 20:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.815 ************************************ 00:08:23.815 START TEST locking_overlapped_coremask 00:08:23.815 ************************************ 00:08:23.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59552 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59552 /var/tmp/spdk.sock 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59552 ']' 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.815 20:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:24.072 [2024-12-05 20:01:25.279436] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:24.072 [2024-12-05 20:01:25.279662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:08:24.072 [2024-12-05 20:01:25.444232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.329 [2024-12-05 20:01:25.566662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.329 [2024-12-05 20:01:25.566801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.329 [2024-12-05 20:01:25.566840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59575 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59575 /var/tmp/spdk2.sock 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59575 /var/tmp/spdk2.sock 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59575 /var/tmp/spdk2.sock 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59575 ']' 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.261 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.261 [2024-12-05 20:01:26.569573] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:25.261 [2024-12-05 20:01:26.569686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ] 00:08:25.521 [2024-12-05 20:01:26.742354] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59552 has claimed it. 00:08:25.521 [2024-12-05 20:01:26.742420] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
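The failure traced above ("Cannot create lock on core 2, probably process 59552 has claimed it", from app.c:claim_cpu_cores) is the expected outcome of two spdk_tgt processes whose coremasks overlap: each core is guarded by a lock file such as /var/tmp/spdk_cpu_lock_002, and the second claimant is refused. The sketch below only mimics that "second claimant fails" behaviour with bash noclobber on a hypothetical temp file; it is not SPDK's actual file-locking code in app.c.

```shell
# Hypothetical mimic of per-core lock files like /var/tmp/spdk_cpu_lock_NNN.
# `claim` and `lockfile` are illustrative names, not SPDK's implementation.
lockfile=$(mktemp -u)    # unused temp path standing in for a core lock file
claim() { ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; }

first=fail;  claim && first=claimed     # first claimant creates the lock file
second=fail; claim && second=claimed    # second claimant is refused (file exists)
echo "first=$first second=$second"
rm -f "$lockfile"
```

With overlapping masks (0x7 vs 0x1c both covering core 2), the second target exits exactly as the log shows.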
00:08:25.780 ERROR: process (pid: 59575) is no longer running 00:08:25.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59575) - No such process 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59552 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59552 ']' 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59552 00:08:25.780 20:01:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.780 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59552 00:08:26.039 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.039 killing process with pid 59552 00:08:26.039 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.039 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59552' 00:08:26.039 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59552 00:08:26.040 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59552 00:08:28.573 ************************************ 00:08:28.573 END TEST locking_overlapped_coremask 00:08:28.573 ************************************ 00:08:28.573 00:08:28.573 real 0m4.595s 00:08:28.573 user 0m12.540s 00:08:28.573 sys 0m0.572s 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.573 20:01:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:28.573 20:01:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.573 20:01:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.573 20:01:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:28.573 ************************************ 00:08:28.573 START TEST 
locking_overlapped_coremask_via_rpc 00:08:28.573 ************************************ 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59639 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59639 /var/tmp/spdk.sock 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59639 ']' 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.573 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.573 [2024-12-05 20:01:29.944045] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:28.573 [2024-12-05 20:01:29.944252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:08:28.832 [2024-12-05 20:01:30.125024] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:28.832 [2024-12-05 20:01:30.125165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.832 [2024-12-05 20:01:30.254006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.832 [2024-12-05 20:01:30.254091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.832 [2024-12-05 20:01:30.254124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59663 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59663 /var/tmp/spdk2.sock 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59663 ']' 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.770 20:01:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.770 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.029 [2024-12-05 20:01:31.261621] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:30.029 [2024-12-05 20:01:31.261855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:08:30.029 [2024-12-05 20:01:31.438111] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
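The NOT/valid_exec_arg helpers traced repeatedly in this log (autotest_common.sh@640-655) run a command that is expected to fail and invert its exit status, after first checking with `type -t` that the argument is something executable. A simplified sketch of that pattern follows; it is not the exact autotest_common.sh code.

```shell
# Simplified sketch of the NOT/valid_exec_arg pattern from autotest_common.sh:
# refuse non-executable arguments, then invert the command's exit status.
NOT() {
  local arg=$1
  case "$(type -t "$arg")" in
    function|builtin|file) ;;                       # executable: run it below
    *) echo "not executable: $arg" >&2; return 127 ;;
  esac
  if "$@"; then return 1; else return 0; fi         # invert success/failure
}

NOT false && echo "expected failure observed"
```

This is why the log treats "ERROR: process (pid: 59575) is no longer running" as a passing step: the failure was the asserted outcome.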
00:08:30.029 [2024-12-05 20:01:31.438171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.287 [2024-12-05 20:01:31.690461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.287 [2024-12-05 20:01:31.694109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.287 [2024-12-05 20:01:31.694137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.838 20:01:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.838 [2024-12-05 20:01:33.946121] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59639 has claimed it. 00:08:32.838 request: 00:08:32.838 { 00:08:32.838 "method": "framework_enable_cpumask_locks", 00:08:32.838 "req_id": 1 00:08:32.838 } 00:08:32.838 Got JSON-RPC error response 00:08:32.838 response: 00:08:32.838 { 00:08:32.838 "code": -32603, 00:08:32.838 "message": "Failed to claim CPU core: 2" 00:08:32.838 } 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59639 /var/tmp/spdk.sock 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59639 ']' 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.838 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59663 /var/tmp/spdk2.sock 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59663 ']' 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
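The check_remaining_locks helper traced earlier (cpu_locks.sh@36-38) simply compares a glob of the actual /var/tmp/spdk_cpu_lock_* files against a brace-expanded list of the expected ones. A standalone sketch of that comparison against a temporary directory (`tmpdir` and `result` are illustrative names):

```shell
# Sketch of the check_remaining_locks comparison: glob of actual lock files
# versus brace-expanded expected list for cores 0-2.
tmpdir=$(mktemp -d)
touch "$tmpdir"/spdk_cpu_lock_{000..002}            # simulate locks for 3 cores

locks=("$tmpdir"/spdk_cpu_lock_*)                   # what actually exists
locks_expected=("$tmpdir"/spdk_cpu_lock_{000..002}) # what the mask 0x7 implies
result=mismatch
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && result=match
echo "$result"
rm -rf "$tmpdir"
```

Globs expand in lexicographic order, so the zero-padded names line up one-to-one with the brace expansion, which is what makes the single `[[ ... == ... ]]` comparison in the test sufficient.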
00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.838 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:33.097 00:08:33.097 real 0m4.607s 00:08:33.097 user 0m1.458s 00:08:33.097 sys 0m0.197s 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.097 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.097 ************************************ 00:08:33.097 END TEST locking_overlapped_coremask_via_rpc 00:08:33.097 ************************************ 00:08:33.097 20:01:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:33.097 20:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59639 ]] 00:08:33.097 20:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59639 00:08:33.097 20:01:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59639 ']' 00:08:33.097 20:01:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59639 00:08:33.097 20:01:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:33.097 20:01:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.097 20:01:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59639 00:08:33.356 killing process with pid 59639 00:08:33.356 20:01:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.356 20:01:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.356 20:01:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59639' 00:08:33.356 20:01:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59639 00:08:33.356 20:01:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59639 00:08:35.885 20:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59663 ]] 00:08:35.885 20:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59663 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59663 ']' 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59663 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59663 00:08:35.885 killing process with pid 59663 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59663' 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59663 00:08:35.885 20:01:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59663 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59639 ]] 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59639 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59639 ']' 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59639 00:08:38.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59639) - No such process 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59639 is not found' 00:08:38.421 Process with pid 59639 is not found 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59663 ]] 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59663 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59663 ']' 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59663 00:08:38.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59663) - No such process 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59663 is not found' 00:08:38.421 Process with pid 59663 is not found 00:08:38.421 20:01:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:38.421 00:08:38.421 real 0m52.180s 00:08:38.421 user 1m30.334s 00:08:38.421 sys 0m6.602s 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.421 ************************************ 00:08:38.421 END TEST cpu_locks 00:08:38.421 
************************************ 00:08:38.421 20:01:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.421 ************************************ 00:08:38.421 END TEST event 00:08:38.421 ************************************ 00:08:38.421 00:08:38.421 real 1m25.240s 00:08:38.421 user 2m37.057s 00:08:38.421 sys 0m10.955s 00:08:38.421 20:01:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.421 20:01:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:38.680 20:01:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:38.680 20:01:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.680 20:01:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.680 20:01:39 -- common/autotest_common.sh@10 -- # set +x 00:08:38.680 ************************************ 00:08:38.680 START TEST thread 00:08:38.680 ************************************ 00:08:38.680 20:01:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:38.680 * Looking for test storage... 
00:08:38.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:38.680 20:01:40 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.680 20:01:40 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.680 20:01:40 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.680 20:01:40 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.680 20:01:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.680 20:01:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.680 20:01:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.680 20:01:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.680 20:01:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.680 20:01:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.680 20:01:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.680 20:01:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.680 20:01:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.680 20:01:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.680 20:01:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.680 20:01:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:38.680 20:01:40 thread -- scripts/common.sh@345 -- # : 1 00:08:38.939 20:01:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.939 20:01:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.939 20:01:40 thread -- scripts/common.sh@365 -- # decimal 1 00:08:38.939 20:01:40 thread -- scripts/common.sh@353 -- # local d=1 00:08:38.939 20:01:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.939 20:01:40 thread -- scripts/common.sh@355 -- # echo 1 00:08:38.939 20:01:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.939 20:01:40 thread -- scripts/common.sh@366 -- # decimal 2 00:08:38.939 20:01:40 thread -- scripts/common.sh@353 -- # local d=2 00:08:38.939 20:01:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.939 20:01:40 thread -- scripts/common.sh@355 -- # echo 2 00:08:38.939 20:01:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.939 20:01:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.939 20:01:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.939 20:01:40 thread -- scripts/common.sh@368 -- # return 0 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.939 --rc genhtml_branch_coverage=1 00:08:38.939 --rc genhtml_function_coverage=1 00:08:38.939 --rc genhtml_legend=1 00:08:38.939 --rc geninfo_all_blocks=1 00:08:38.939 --rc geninfo_unexecuted_blocks=1 00:08:38.939 00:08:38.939 ' 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.939 --rc genhtml_branch_coverage=1 00:08:38.939 --rc genhtml_function_coverage=1 00:08:38.939 --rc genhtml_legend=1 00:08:38.939 --rc geninfo_all_blocks=1 00:08:38.939 --rc geninfo_unexecuted_blocks=1 00:08:38.939 00:08:38.939 ' 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.939 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.939 --rc genhtml_branch_coverage=1 00:08:38.939 --rc genhtml_function_coverage=1 00:08:38.939 --rc genhtml_legend=1 00:08:38.939 --rc geninfo_all_blocks=1 00:08:38.939 --rc geninfo_unexecuted_blocks=1 00:08:38.939 00:08:38.939 ' 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.939 --rc genhtml_branch_coverage=1 00:08:38.939 --rc genhtml_function_coverage=1 00:08:38.939 --rc genhtml_legend=1 00:08:38.939 --rc geninfo_all_blocks=1 00:08:38.939 --rc geninfo_unexecuted_blocks=1 00:08:38.939 00:08:38.939 ' 00:08:38.939 20:01:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.939 20:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.939 ************************************ 00:08:38.939 START TEST thread_poller_perf 00:08:38.939 ************************************ 00:08:38.939 20:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:38.939 [2024-12-05 20:01:40.190808] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:38.939 [2024-12-05 20:01:40.190982] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:08:38.939 [2024-12-05 20:01:40.353722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.198 [2024-12-05 20:01:40.467054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.198 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:40.575 [2024-12-05T20:01:42.012Z] ====================================== 00:08:40.575 [2024-12-05T20:01:42.012Z] busy:2301079182 (cyc) 00:08:40.575 [2024-12-05T20:01:42.012Z] total_run_count: 396000 00:08:40.575 [2024-12-05T20:01:42.012Z] tsc_hz: 2290000000 (cyc) 00:08:40.575 [2024-12-05T20:01:42.012Z] ====================================== 00:08:40.575 [2024-12-05T20:01:42.012Z] poller_cost: 5810 (cyc), 2537 (nsec) 00:08:40.575 00:08:40.575 real 0m1.554s 00:08:40.575 user 0m1.361s 00:08:40.575 sys 0m0.086s 00:08:40.575 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.575 ************************************ 00:08:40.575 END TEST thread_poller_perf 00:08:40.575 ************************************ 00:08:40.575 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:40.575 20:01:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:40.575 20:01:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:40.575 20:01:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.575 20:01:41 thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.575 ************************************ 00:08:40.575 START TEST thread_poller_perf 00:08:40.575 
************************************ 00:08:40.575 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:40.575 [2024-12-05 20:01:41.806881] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:40.575 [2024-12-05 20:01:41.807005] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59900 ] 00:08:40.575 [2024-12-05 20:01:41.982388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.834 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:40.834 [2024-12-05 20:01:42.096249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.209 [2024-12-05T20:01:43.646Z] ====================================== 00:08:42.209 [2024-12-05T20:01:43.646Z] busy:2294202680 (cyc) 00:08:42.209 [2024-12-05T20:01:43.646Z] total_run_count: 4655000 00:08:42.209 [2024-12-05T20:01:43.646Z] tsc_hz: 2290000000 (cyc) 00:08:42.209 [2024-12-05T20:01:43.646Z] ====================================== 00:08:42.209 [2024-12-05T20:01:43.646Z] poller_cost: 492 (cyc), 214 (nsec) 00:08:42.209 00:08:42.209 real 0m1.568s 00:08:42.209 user 0m1.368s 00:08:42.209 sys 0m0.093s 00:08:42.209 ************************************ 00:08:42.209 END TEST thread_poller_perf 00:08:42.209 ************************************ 00:08:42.209 20:01:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.209 20:01:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 20:01:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:42.209 ************************************ 00:08:42.209 END TEST thread 00:08:42.209 ************************************ 00:08:42.209 
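The poller_cost figures reported by poller_perf above are consistent with deriving cost-per-poll from the printed stats: cycles as busy divided by total_run_count, then converted to nanoseconds via tsc_hz. Recomputing the first run's numbers with shell arithmetic:

```shell
# Recompute poller_cost from the first run's reported figures:
#   poller_cost(cyc)  = busy / total_run_count
#   poller_cost(nsec) = cyc * 1e9 / tsc_hz
busy=2301079182
runs=396000
tsc_hz=2290000000

cyc=$(( busy / runs ))                    # -> 5810
nsec=$(( cyc * 1000000000 / tsc_hz ))     # -> 2537
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
```

The same arithmetic reproduces the zero-period run as well (2294202680 / 4655000 gives 492 cyc, i.e. 214 nsec), showing how much of the first run's cost comes from the 1 microsecond timer period.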
00:08:42.209 real 0m3.465s 00:08:42.209 user 0m2.897s 00:08:42.209 sys 0m0.366s 00:08:42.209 20:01:43 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.209 20:01:43 thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 20:01:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:42.209 20:01:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:42.209 20:01:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.209 20:01:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.209 20:01:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 ************************************ 00:08:42.209 START TEST app_cmdline 00:08:42.209 ************************************ 00:08:42.209 20:01:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:42.209 * Looking for test storage... 00:08:42.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:42.209 20:01:43 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.209 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.209 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.209 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.209 20:01:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.467 20:01:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.467 --rc genhtml_branch_coverage=1 00:08:42.467 --rc genhtml_function_coverage=1 00:08:42.467 --rc 
genhtml_legend=1 00:08:42.467 --rc geninfo_all_blocks=1 00:08:42.467 --rc geninfo_unexecuted_blocks=1 00:08:42.467 00:08:42.467 ' 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.467 --rc genhtml_branch_coverage=1 00:08:42.467 --rc genhtml_function_coverage=1 00:08:42.467 --rc genhtml_legend=1 00:08:42.467 --rc geninfo_all_blocks=1 00:08:42.467 --rc geninfo_unexecuted_blocks=1 00:08:42.467 00:08:42.467 ' 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.467 --rc genhtml_branch_coverage=1 00:08:42.467 --rc genhtml_function_coverage=1 00:08:42.467 --rc genhtml_legend=1 00:08:42.467 --rc geninfo_all_blocks=1 00:08:42.467 --rc geninfo_unexecuted_blocks=1 00:08:42.467 00:08:42.467 ' 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.467 --rc genhtml_branch_coverage=1 00:08:42.467 --rc genhtml_function_coverage=1 00:08:42.467 --rc genhtml_legend=1 00:08:42.467 --rc geninfo_all_blocks=1 00:08:42.467 --rc geninfo_unexecuted_blocks=1 00:08:42.467 00:08:42.467 ' 00:08:42.467 20:01:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:42.467 20:01:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59989 00:08:42.467 20:01:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:42.467 20:01:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59989 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59989 ']' 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.467 20:01:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.467 [2024-12-05 20:01:43.763337] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:42.467 [2024-12-05 20:01:43.763535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59989 ] 00:08:42.724 [2024-12-05 20:01:43.931975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.724 [2024-12-05 20:01:44.041402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.657 20:01:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.657 20:01:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:43.657 20:01:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:43.657 { 00:08:43.657 "version": "SPDK v25.01-pre git sha1 a333974e5", 00:08:43.657 "fields": { 00:08:43.657 "major": 25, 00:08:43.657 "minor": 1, 00:08:43.657 "patch": 0, 00:08:43.657 "suffix": "-pre", 00:08:43.657 "commit": "a333974e5" 00:08:43.657 } 00:08:43.657 } 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:43.657 20:01:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.657 20:01:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.657 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:43.658 20:01:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.915 20:01:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:43.915 20:01:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:43.915 20:01:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.915 request: 00:08:43.915 { 00:08:43.915 "method": "env_dpdk_get_mem_stats", 00:08:43.915 "req_id": 1 00:08:43.915 } 00:08:43.915 Got JSON-RPC error response 00:08:43.915 response: 00:08:43.915 { 00:08:43.915 "code": -32601, 00:08:43.915 "message": "Method not found" 00:08:43.915 } 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.915 20:01:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59989 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59989 ']' 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59989 00:08:43.915 20:01:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59989 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59989' 00:08:44.172 killing process with pid 59989 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59989 00:08:44.172 20:01:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59989 00:08:46.701 00:08:46.701 real 0m4.307s 00:08:46.701 user 0m4.505s 00:08:46.701 sys 0m0.598s 00:08:46.701 20:01:47 app_cmdline -- 
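The cmdline test above deliberately invokes env_dpdk_get_mem_stats against a target started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, and expects the standard JSON-RPC error object shown in the log. A sketch of recognizing that response (the field layout is copied from the log; the helper name is illustrative):

```python
import json

# Error body as printed in the log above
response = json.loads('{"code": -32601, "message": "Method not found"}')

def is_method_not_found(err: dict) -> bool:
    # -32601 is the JSON-RPC 2.0 code for a method that is unknown
    # or, as here, filtered out by the target's rpcs-allowed list
    return err.get("code") == -32601

assert is_method_not_found(response)
```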
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.701 20:01:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:46.701 ************************************ 00:08:46.701 END TEST app_cmdline 00:08:46.701 ************************************ 00:08:46.701 20:01:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:46.701 20:01:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.701 20:01:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.701 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:08:46.701 ************************************ 00:08:46.701 START TEST version 00:08:46.701 ************************************ 00:08:46.701 20:01:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:46.701 * Looking for test storage... 00:08:46.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:46.701 20:01:47 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.701 20:01:47 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.701 20:01:47 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.701 20:01:47 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.701 20:01:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.701 20:01:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.701 20:01:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.701 20:01:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.701 20:01:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.701 20:01:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.701 20:01:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.701 20:01:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.701 20:01:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.701 20:01:47 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:46.701 20:01:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.701 20:01:47 version -- scripts/common.sh@344 -- # case "$op" in 00:08:46.701 20:01:47 version -- scripts/common.sh@345 -- # : 1 00:08:46.701 20:01:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.701 20:01:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.701 20:01:47 version -- scripts/common.sh@365 -- # decimal 1 00:08:46.701 20:01:48 version -- scripts/common.sh@353 -- # local d=1 00:08:46.701 20:01:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.701 20:01:48 version -- scripts/common.sh@355 -- # echo 1 00:08:46.701 20:01:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.701 20:01:48 version -- scripts/common.sh@366 -- # decimal 2 00:08:46.701 20:01:48 version -- scripts/common.sh@353 -- # local d=2 00:08:46.701 20:01:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.701 20:01:48 version -- scripts/common.sh@355 -- # echo 2 00:08:46.701 20:01:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.701 20:01:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.701 20:01:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.701 20:01:48 version -- scripts/common.sh@368 -- # return 0 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.701 --rc genhtml_branch_coverage=1 00:08:46.701 --rc genhtml_function_coverage=1 00:08:46.701 --rc genhtml_legend=1 00:08:46.701 --rc geninfo_all_blocks=1 00:08:46.701 --rc geninfo_unexecuted_blocks=1 00:08:46.701 00:08:46.701 ' 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:08:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.701 --rc genhtml_branch_coverage=1 00:08:46.701 --rc genhtml_function_coverage=1 00:08:46.701 --rc genhtml_legend=1 00:08:46.701 --rc geninfo_all_blocks=1 00:08:46.701 --rc geninfo_unexecuted_blocks=1 00:08:46.701 00:08:46.701 ' 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.701 --rc genhtml_branch_coverage=1 00:08:46.701 --rc genhtml_function_coverage=1 00:08:46.701 --rc genhtml_legend=1 00:08:46.701 --rc geninfo_all_blocks=1 00:08:46.701 --rc geninfo_unexecuted_blocks=1 00:08:46.701 00:08:46.701 ' 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.701 --rc genhtml_branch_coverage=1 00:08:46.701 --rc genhtml_function_coverage=1 00:08:46.701 --rc genhtml_legend=1 00:08:46.701 --rc geninfo_all_blocks=1 00:08:46.701 --rc geninfo_unexecuted_blocks=1 00:08:46.701 00:08:46.701 ' 00:08:46.701 20:01:48 version -- app/version.sh@17 -- # get_header_version major 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.701 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:08:46.701 20:01:48 version -- app/version.sh@17 -- # major=25 00:08:46.701 20:01:48 version -- app/version.sh@18 -- # get_header_version minor 00:08:46.701 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.701 20:01:48 version -- app/version.sh@18 -- # minor=1 00:08:46.701 20:01:48 
version -- app/version.sh@19 -- # get_header_version patch 00:08:46.701 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.701 20:01:48 version -- app/version.sh@19 -- # patch=0 00:08:46.701 20:01:48 version -- app/version.sh@20 -- # get_header_version suffix 00:08:46.701 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:08:46.701 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.701 20:01:48 version -- app/version.sh@20 -- # suffix=-pre 00:08:46.701 20:01:48 version -- app/version.sh@22 -- # version=25.1 00:08:46.701 20:01:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:46.701 20:01:48 version -- app/version.sh@28 -- # version=25.1rc0 00:08:46.701 20:01:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:46.701 20:01:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:46.701 20:01:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:46.701 20:01:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:46.701 ************************************ 00:08:46.701 END TEST version 00:08:46.701 ************************************ 00:08:46.701 00:08:46.701 real 0m0.282s 00:08:46.701 user 0m0.161s 00:08:46.701 sys 0m0.171s 00:08:46.701 20:01:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.702 20:01:48 version -- common/autotest_common.sh@10 -- # set +x 00:08:46.961 
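The version test above greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and assembles the string `25.1rc0`, matching what `python3 -c 'import spdk; print(spdk.__version__)'` reports. The branching visible in the trace (patch appended only when non-zero, `rc0` for a `-pre` suffix) can be sketched as follows; this is a reconstruction from the log, not the actual version.sh code:

```python
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    """Assemble the version string the way the logged version.sh run does."""
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"      # skipped in the log: patch=0
    if suffix == "-pre":
        version += "rc0"            # pre-release builds report an rc0 version
    return version

# Values extracted by the test above: major=25, minor=1, patch=0, suffix=-pre
assert spdk_version(25, 1, 0, "-pre") == "25.1rc0"
```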
20:01:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:46.961 20:01:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:46.961 20:01:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:46.961 20:01:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.961 20:01:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.961 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:08:46.961 ************************************ 00:08:46.961 START TEST bdev_raid 00:08:46.961 ************************************ 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:46.961 * Looking for test storage... 00:08:46.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.961 20:01:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.961 --rc genhtml_branch_coverage=1 00:08:46.961 --rc genhtml_function_coverage=1 00:08:46.961 --rc genhtml_legend=1 00:08:46.961 --rc geninfo_all_blocks=1 00:08:46.961 --rc geninfo_unexecuted_blocks=1 00:08:46.961 00:08:46.961 ' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.961 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:46.961 --rc genhtml_branch_coverage=1 00:08:46.961 --rc genhtml_function_coverage=1 00:08:46.961 --rc genhtml_legend=1 00:08:46.961 --rc geninfo_all_blocks=1 00:08:46.961 --rc geninfo_unexecuted_blocks=1 00:08:46.961 00:08:46.961 ' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.961 --rc genhtml_branch_coverage=1 00:08:46.961 --rc genhtml_function_coverage=1 00:08:46.961 --rc genhtml_legend=1 00:08:46.961 --rc geninfo_all_blocks=1 00:08:46.961 --rc geninfo_unexecuted_blocks=1 00:08:46.961 00:08:46.961 ' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.961 --rc genhtml_branch_coverage=1 00:08:46.961 --rc genhtml_function_coverage=1 00:08:46.961 --rc genhtml_legend=1 00:08:46.961 --rc geninfo_all_blocks=1 00:08:46.961 --rc geninfo_unexecuted_blocks=1 00:08:46.961 00:08:46.961 ' 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:46.961 20:01:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:46.961 20:01:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.961 20:01:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.219 ************************************ 
00:08:47.219 START TEST raid1_resize_data_offset_test 00:08:47.219 ************************************ 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60171 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60171' 00:08:47.219 Process raid pid: 60171 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60171 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60171 ']' 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.219 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.219 [2024-12-05 20:01:48.491453] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:47.219 [2024-12-05 20:01:48.491659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.476 [2024-12-05 20:01:48.664628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.476 [2024-12-05 20:01:48.780370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.733 [2024-12-05 20:01:48.981156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.733 [2024-12-05 20:01:48.981299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.991 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.991 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.992 malloc0 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.992 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.268 malloc1 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.268 20:01:49 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.268 null0 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.268 [2024-12-05 20:01:49.520859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:48.268 [2024-12-05 20:01:49.522673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:48.268 [2024-12-05 20:01:49.522724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:48.268 [2024-12-05 20:01:49.522860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:48.268 [2024-12-05 20:01:49.522874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:48.268 [2024-12-05 20:01:49.523161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:48.268 [2024-12-05 20:01:49.523331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:48.268 [2024-12-05 20:01:49.523350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:48.268 [2024-12-05 20:01:49.523499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.268 [2024-12-05 20:01:49.576763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.268 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.833 malloc2 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.833 [2024-12-05 20:01:50.117421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:48.833 [2024-12-05 20:01:50.133949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.833 [2024-12-05 20:01:50.135664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60171 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60171 ']' 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60171 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60171 00:08:48.833 killing process with pid 60171 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60171' 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60171 00:08:48.833 [2024-12-05 20:01:50.214166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.833 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60171 00:08:48.833 [2024-12-05 20:01:50.215540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:48.833 [2024-12-05 20:01:50.215603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.833 [2024-12-05 20:01:50.215620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:48.833 [2024-12-05 20:01:50.251565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.833 [2024-12-05 20:01:50.251875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.833 [2024-12-05 20:01:50.251908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:50.734 [2024-12-05 20:01:52.051741] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.157 20:01:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:52.157 00:08:52.157 real 0m4.769s 00:08:52.157 user 0m4.699s 00:08:52.157 sys 0m0.515s 00:08:52.157 20:01:53 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.157 20:01:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.157 ************************************ 00:08:52.157 END TEST raid1_resize_data_offset_test 00:08:52.157 ************************************ 00:08:52.157 20:01:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:52.157 20:01:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.157 20:01:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.157 20:01:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.157 ************************************ 00:08:52.157 START TEST raid0_resize_superblock_test 00:08:52.157 ************************************ 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60260 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60260' 00:08:52.157 Process raid pid: 60260 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60260 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60260 ']' 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.157 20:01:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.157 [2024-12-05 20:01:53.327695] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:52.157 [2024-12-05 20:01:53.327905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.157 [2024-12-05 20:01:53.498291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.415 [2024-12-05 20:01:53.613285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.415 [2024-12-05 20:01:53.810259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.415 [2024-12-05 20:01:53.810388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.982 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.982 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.982 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:52.982 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.982 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:53.241 malloc0 00:08:53.241 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.500 [2024-12-05 20:01:54.682809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:53.500 [2024-12-05 20:01:54.682870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.500 [2024-12-05 20:01:54.682906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:53.500 [2024-12-05 20:01:54.682918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.500 [2024-12-05 20:01:54.685082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.500 [2024-12-05 20:01:54.685120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:53.500 pt0 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.500 0915550c-4777-493b-ac4f-4cc42f0bd87f 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.500 c91ae6e0-1955-45b1-9b58-d1c5908849a9 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.500 5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.500 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.500 [2024-12-05 20:01:54.816538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c91ae6e0-1955-45b1-9b58-d1c5908849a9 is claimed 00:08:53.500 [2024-12-05 20:01:54.816626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8 is claimed 00:08:53.501 [2024-12-05 20:01:54.816760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:53.501 [2024-12-05 20:01:54.816781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:53.501 [2024-12-05 20:01:54.817111] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:53.501 [2024-12-05 20:01:54.817318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:53.501 [2024-12-05 20:01:54.817337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:53.501 [2024-12-05 20:01:54.817498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:53.501 20:01:54 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:53.501 [2024-12-05 20:01:54.924542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.501 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.801 [2024-12-05 20:01:54.972450] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:53.801 [2024-12-05 20:01:54.972538] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c91ae6e0-1955-45b1-9b58-d1c5908849a9' was resized: old size 131072, new size 204800 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.801 [2024-12-05 20:01:54.980348] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:53.801 [2024-12-05 20:01:54.980371] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8' was resized: old size 131072, new size 204800 00:08:53.801 [2024-12-05 20:01:54.980397] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.801 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:53.802 20:01:55 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.802 [2024-12-05 20:01:55.092288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.802 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.802 [2024-12-05 20:01:55.136008] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:53.803 [2024-12-05 20:01:55.136077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:53.803 [2024-12-05 20:01:55.136092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.803 [2024-12-05 20:01:55.136105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:53.803 [2024-12-05 20:01:55.136222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.803 [2024-12-05 20:01:55.136255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.803 [2024-12-05 20:01:55.136267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:53.803 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.803 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:53.803 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.803 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.803 [2024-12-05 20:01:55.143922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:53.803 [2024-12-05 20:01:55.143971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.803 [2024-12-05 20:01:55.143992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:53.803 [2024-12-05 20:01:55.144003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.803 [2024-12-05 20:01:55.146148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.803 [2024-12-05 20:01:55.146186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:53.803 [2024-12-05 20:01:55.147812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c91ae6e0-1955-45b1-9b58-d1c5908849a9 00:08:53.803 [2024-12-05 20:01:55.147911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c91ae6e0-1955-45b1-9b58-d1c5908849a9 is claimed 00:08:53.803 [2024-12-05 20:01:55.148037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8 00:08:53.803 [2024-12-05 20:01:55.148056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8 is claimed 00:08:53.803 [2024-12-05 20:01:55.148201] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5c26fbe2-8062-4ef8-8e04-ebf46dffc9b8 (2) smaller than existing raid bdev Raid (3) 00:08:53.803 [2024-12-05 20:01:55.148229] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c91ae6e0-1955-45b1-9b58-d1c5908849a9: File exists 00:08:53.803 [2024-12-05 20:01:55.148267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:53.803 [2024-12-05 20:01:55.148295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:53.803 [2024-12-05 20:01:55.148574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:53.803 [2024-12-05 20:01:55.148742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:53.803 [2024-12-05 20:01:55.148751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:53.803 [2024-12-05 20:01:55.148961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.803 pt0 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:53.804 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.805 [2024-12-05 20:01:55.172464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60260 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60260 ']' 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60260 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.805 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60260 00:08:54.063 killing process with pid 60260 00:08:54.063 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.063 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.063 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60260' 00:08:54.063 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60260 00:08:54.063 [2024-12-05 20:01:55.254816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.063 [2024-12-05 20:01:55.254907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.063 [2024-12-05 20:01:55.254954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.063 [2024-12-05 20:01:55.254963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:54.063 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60260 00:08:55.438 [2024-12-05 20:01:56.680019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.830 20:01:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:56.830 00:08:56.830 real 0m4.584s 00:08:56.830 user 0m4.813s 00:08:56.830 sys 0m0.554s 00:08:56.830 20:01:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.830 20:01:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.830 
************************************ 00:08:56.830 END TEST raid0_resize_superblock_test 00:08:56.830 ************************************ 00:08:56.830 20:01:57 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:56.830 20:01:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.830 20:01:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.830 20:01:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.830 ************************************ 00:08:56.830 START TEST raid1_resize_superblock_test 00:08:56.830 ************************************ 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60359 00:08:56.830 Process raid pid: 60359 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60359' 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60359 00:08:56.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60359 ']' 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.830 20:01:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.830 [2024-12-05 20:01:57.979210] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:08:56.830 [2024-12-05 20:01:57.979773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.830 [2024-12-05 20:01:58.156521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.089 [2024-12-05 20:01:58.268718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.089 [2024-12-05 20:01:58.472151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.089 [2024-12-05 20:01:58.472263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.656 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.656 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.656 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:57.656 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.656 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.915 malloc0 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.915 [2024-12-05 20:01:59.332750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:57.915 [2024-12-05 20:01:59.332874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.915 [2024-12-05 20:01:59.332935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.915 [2024-12-05 20:01:59.332982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.915 [2024-12-05 20:01:59.335289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.915 [2024-12-05 20:01:59.335381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:57.915 pt0 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.915 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 fa8747f5-1038-4bbf-966c-a38d899b03ad 00:08:58.174 20:01:59 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 ce86255f-00c1-484f-a0fe-783670172e95 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 e07a21fd-9e2d-4d01-8ab3-af7c45879875 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-12-05 20:01:59.467221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce86255f-00c1-484f-a0fe-783670172e95 is claimed 00:08:58.174 [2024-12-05 20:01:59.467387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e07a21fd-9e2d-4d01-8ab3-af7c45879875 is claimed 00:08:58.174 [2024-12-05 20:01:59.467569] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:58.174 [2024-12-05 20:01:59.467619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:58.174 [2024-12-05 20:01:59.467901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.174 [2024-12-05 20:01:59.468117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:58.174 [2024-12-05 20:01:59.468173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:58.174 [2024-12-05 20:01:59.468327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 20:01:59 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-12-05 20:01:59.555268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-12-05 20:01:59.583169] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:58.174 [2024-12-05 20:01:59.583194] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'ce86255f-00c1-484f-a0fe-783670172e95' was resized: old size 131072, new size 204800 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-12-05 20:01:59.595109] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:58.174 [2024-12-05 20:01:59.595173] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e07a21fd-9e2d-4d01-8ab3-af7c45879875' was resized: old size 131072, new size 204800 00:08:58.174 [2024-12-05 20:01:59.595203] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.174 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:58.433 20:01:59 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:58.433 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 [2024-12-05 20:01:59.711029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 [2024-12-05 20:01:59.754735] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:58.434 [2024-12-05 20:01:59.754863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:58.434 [2024-12-05 20:01:59.754924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:58.434 [2024-12-05 20:01:59.755090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.434 [2024-12-05 20:01:59.755308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.434 [2024-12-05 20:01:59.755415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.434 [2024-12-05 20:01:59.755470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 [2024-12-05 20:01:59.766649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:58.434 [2024-12-05 20:01:59.766749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.434 [2024-12-05 20:01:59.766785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:58.434 [2024-12-05 20:01:59.766817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.434 
[2024-12-05 20:01:59.768982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.434 [2024-12-05 20:01:59.769055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:58.434 [2024-12-05 20:01:59.770701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ce86255f-00c1-484f-a0fe-783670172e95 00:08:58.434 [2024-12-05 20:01:59.770831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce86255f-00c1-484f-a0fe-783670172e95 is claimed 00:08:58.434 [2024-12-05 20:01:59.771011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e07a21fd-9e2d-4d01-8ab3-af7c45879875 00:08:58.434 [2024-12-05 20:01:59.771076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e07a21fd-9e2d-4d01-8ab3-af7c45879875 is claimed 00:08:58.434 [2024-12-05 20:01:59.771266] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e07a21fd-9e2d-4d01-8ab3-af7c45879875 (2) smaller than existing raid bdev Raid (3) 00:08:58.434 [2024-12-05 20:01:59.771337] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ce86255f-00c1-484f-a0fe-783670172e95: File exists 00:08:58.434 [2024-12-05 20:01:59.771417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:58.434 [2024-12-05 20:01:59.771455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:58.434 [2024-12-05 20:01:59.771726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:58.434 pt0 00:08:58.434 [2024-12-05 20:01:59.771929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:58.434 [2024-12-05 20:01:59.771940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:58.434 [2024-12-05 20:01:59.772085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 [2024-12-05 20:01:59.794969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60359 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60359 ']' 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60359 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.434 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60359 00:08:58.692 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.692 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.692 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60359' 00:08:58.692 killing process with pid 60359 00:08:58.692 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60359 00:08:58.693 [2024-12-05 20:01:59.878001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.693 [2024-12-05 20:01:59.878126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.693 [2024-12-05 20:01:59.878210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.693 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60359 00:08:58.693 [2024-12-05 20:01:59.878282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:00.069 [2024-12-05 20:02:01.267119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.003 20:02:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:01.003 00:09:01.003 real 0m4.488s 00:09:01.003 user 0m4.674s 00:09:01.003 sys 0m0.578s
00:09:01.003 ************************************ 00:09:01.003 END TEST raid1_resize_superblock_test 00:09:01.003 ************************************ 00:09:01.003 20:02:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.003 20:02:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.003 20:02:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:01.263 20:02:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:01.263 20:02:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:01.263 20:02:02 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:01.263 20:02:02 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:01.263 20:02:02 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:01.263 20:02:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.263 20:02:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.263 20:02:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.263 ************************************ 00:09:01.263 START TEST raid_function_test_raid0 00:09:01.263 ************************************ 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:01.263 Process raid pid: 60456 00:09:01.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60456 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60456' 00:09:01.263 20:02:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60456 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.264 20:02:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 [2024-12-05 20:02:02.555468] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:01.264 [2024-12-05 20:02:02.555585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.524 [2024-12-05 20:02:02.730555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.524 [2024-12-05 20:02:02.844103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.782 [2024-12-05 20:02:03.038129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.782 [2024-12-05 20:02:03.038176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:02.040 Base_1 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:02.040 Base_2 00:09:02.040 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:02.041 [2024-12-05 20:02:03.450638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:02.041 [2024-12-05 20:02:03.452499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:02.041 [2024-12-05 20:02:03.452576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:02.041 [2024-12-05 20:02:03.452589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:02.041 [2024-12-05 20:02:03.452869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:02.041 [2024-12-05 20:02:03.453068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:02.041 [2024-12-05 20:02:03.453080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:02.041 [2024-12-05 20:02:03.453241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:02.041 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:02.299 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:02.299 [2024-12-05 20:02:03.694310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:02.299 /dev/nbd0 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.558 
20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.558 1+0 records in 00:09:02.558 1+0 records out 00:09:02.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042502 s, 9.6 MB/s 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.558 { 00:09:02.558 "nbd_device": "/dev/nbd0", 00:09:02.558 "bdev_name": "raid" 00:09:02.558 } 00:09:02.558 ]' 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.558 { 00:09:02.558 "nbd_device": "/dev/nbd0", 00:09:02.558 "bdev_name": "raid" 00:09:02.558 } 00:09:02.558 ]' 00:09:02.558 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:02.817 4096+0 records in 00:09:02.817 4096+0 records out 00:09:02.817 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0343165 s, 61.1 MB/s 00:09:02.817 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:03.076 4096+0 records in 00:09:03.076 4096+0 records out 00:09:03.076 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.198219 s, 10.6 MB/s 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:03.076 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:03.076 128+0 records in 00:09:03.077 128+0 records out 00:09:03.077 65536 bytes (66 kB, 64 KiB) copied, 0.00139966 s, 46.8 MB/s 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:03.077 2035+0 records in 00:09:03.077 2035+0 records out 00:09:03.077 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0135089 s, 77.1 MB/s 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:03.077 20:02:04 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:03.077 456+0 records in 00:09:03.077 456+0 records out 00:09:03.077 233472 bytes (233 kB, 228 KiB) copied, 0.00401109 s, 58.2 MB/s 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.077 20:02:04 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.077 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.336 [2024-12-05 20:02:04.634023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:03.336 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60456 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60456 ']' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60456 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60456 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.594 killing process with pid 60456 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.594 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60456' 00:09:03.595 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60456 
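The `killprocess` sequence traced above (`autotest_common.sh@954`–`@973`) reduces to: check a pid was passed, confirm the process is alive with `kill -0`, read its command name with `ps`, refuse to kill `sudo` directly, then signal and reap it. A simplified sketch of that flow — the real helper also handles the sudo case and waits for the pid at `@978`, which are collapsed here:

```shell
#!/usr/bin/env bash
# Simplified take on autotest_common.sh's killprocess; sudo escalation omitted.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z ... ']' guard from @954
    kill -0 "$pid" 2>/dev/null || return 1    # is the process alive? (@958)
    local name
    name=$(ps --no-headers -o comm= "$pid")   # command name, as at @960
    [ "$name" = sudo ] && return 1            # real helper escalates instead
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; 'wait' fails if not our child
}
```

In the trace the name resolves to `reactor_0` (the SPDK app thread), so the plain `kill` path is taken.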
00:09:03.595 [2024-12-05 20:02:04.952265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.595 [2024-12-05 20:02:04.952371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.595 [2024-12-05 20:02:04.952424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.595 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60456 00:09:03.595 [2024-12-05 20:02:04.952441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:03.855 [2024-12-05 20:02:05.156484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.230 20:02:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:05.230 00:09:05.230 real 0m3.818s 00:09:05.230 user 0m4.453s 00:09:05.230 sys 0m0.897s 00:09:05.230 20:02:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.230 20:02:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:05.230 ************************************ 00:09:05.230 END TEST raid_function_test_raid0 00:09:05.230 ************************************ 00:09:05.230 20:02:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:05.230 20:02:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.230 20:02:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.230 20:02:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.230 ************************************ 00:09:05.230 START TEST raid_function_test_concat 00:09:05.230 ************************************ 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60585 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60585' 00:09:05.230 Process raid pid: 60585 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60585 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60585 ']' 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.230 20:02:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:05.230 [2024-12-05 20:02:06.442012] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
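`waitforlisten` (invoked at `bdev_raid.sh@71` with the freshly launched `bdev_svc` pid and `max_retries=100`) blocks until the app is up and serving RPCs on `/var/tmp/spdk.sock`. A stripped-down sketch of the polling loop — this version only waits for the UNIX socket path to appear, whereas the real helper also verifies the pid is still alive and probes the socket via `rpc.py`:

```shell
#!/usr/bin/env bash
# Minimal polling loop in the spirit of waitforlisten; only checks that the
# UNIX-domain socket file exists, not that the server actually answers RPCs.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # socket file present: assume listener is up
        sleep 0.1
    done
    return 1                          # gave up after max_retries polls
}
```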
00:09:05.230 [2024-12-05 20:02:06.442216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.230 [2024-12-05 20:02:06.616555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.489 [2024-12-05 20:02:06.732238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.748 [2024-12-05 20:02:06.936586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.748 [2024-12-05 20:02:06.936637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:06.007 Base_1 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:06.007 Base_2 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:06.007 [2024-12-05 20:02:07.365553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:06.007 [2024-12-05 20:02:07.367340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:06.007 [2024-12-05 20:02:07.367410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:06.007 [2024-12-05 20:02:07.367423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:06.007 [2024-12-05 20:02:07.367691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:06.007 [2024-12-05 20:02:07.367858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:06.007 [2024-12-05 20:02:07.367867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:06.007 [2024-12-05 20:02:07.368017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.007 20:02:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:06.007 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:06.266 [2024-12-05 20:02:07.613222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:06.266 /dev/nbd0 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.266 1+0 records in 00:09:06.266 1+0 records out 00:09:06.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600897 s, 6.8 MB/s 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
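The `nbd_get_count` checks in these traces (`nbd_common.sh@63`–`@66`) pipe the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and count matches with `grep -c /dev/nbd`. When the list is empty, `grep -c` still prints `0` but exits nonzero, which is why a bare `true` shows up in the empty-list traces at `@65`. A jq-free sketch of just the counting step:

```shell
#!/usr/bin/env bash
# Count /dev/nbd entries in a newline-separated device list, tolerating an
# empty list: grep -c prints 0 but exits 1 on no match, hence '|| true'
# (the same reason nbd_common.sh@65 logs a bare 'true' in the traces).
count_nbd() {
    printf '%s\n' "$1" | grep -c /dev/nbd || true
}
```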
00:09:06.266 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.525 { 00:09:06.525 "nbd_device": "/dev/nbd0", 00:09:06.525 "bdev_name": "raid" 00:09:06.525 } 00:09:06.525 ]' 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.525 { 00:09:06.525 "nbd_device": "/dev/nbd0", 00:09:06.525 "bdev_name": "raid" 00:09:06.525 } 00:09:06.525 ]' 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:06.525 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:06.784 20:02:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:06.784 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:06.784 4096+0 records in 00:09:06.784 4096+0 records out 00:09:06.784 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0191825 s, 109 MB/s 00:09:06.784 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:06.784 4096+0 records in 00:09:06.784 4096+0 records out 00:09:06.784 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.199429 s, 10.5 MB/s 00:09:06.784 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:06.784 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:07.043 128+0 records in 00:09:07.043 128+0 records out 00:09:07.043 65536 bytes (66 kB, 64 KiB) copied, 0.00130391 s, 50.3 MB/s 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:07.043 2035+0 records in 00:09:07.043 2035+0 records out 00:09:07.043 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0136119 s, 76.5 MB/s 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:07.043 20:02:08 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:07.044 456+0 records in 00:09:07.044 456+0 records out 00:09:07.044 233472 bytes (233 kB, 228 KiB) copied, 0.00371283 s, 62.9 MB/s 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:07.044 20:02:08 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.044 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.303 [2024-12-05 20:02:08.595089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:07.303 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60585 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60585 ']' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60585 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60585 00:09:07.560 killing process with pid 60585 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 60585' 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60585 00:09:07.560 [2024-12-05 20:02:08.915781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.560 [2024-12-05 20:02:08.915913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.560 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60585 00:09:07.560 [2024-12-05 20:02:08.915976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.560 [2024-12-05 20:02:08.915990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:07.819 [2024-12-05 20:02:09.132435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.191 20:02:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:09.191 00:09:09.191 real 0m3.906s 00:09:09.191 user 0m4.554s 00:09:09.191 sys 0m0.975s 00:09:09.191 20:02:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.191 20:02:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:09.191 ************************************ 00:09:09.191 END TEST raid_function_test_concat 00:09:09.191 ************************************ 00:09:09.191 20:02:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:09.191 20:02:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.191 20:02:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.191 20:02:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.191 ************************************ 00:09:09.191 START TEST raid0_resize_test 00:09:09.191 ************************************ 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:09.191 Process raid pid: 60713 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60713 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60713' 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60713 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60713 ']' 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.191 20:02:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.191 [2024-12-05 20:02:10.413525] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:09.191 [2024-12-05 20:02:10.413745] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.191 [2024-12-05 20:02:10.570814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.448 [2024-12-05 20:02:10.686467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.448 [2024-12-05 20:02:10.880808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.448 [2024-12-05 20:02:10.880876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 Base_1 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:10.029 Base_2 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 [2024-12-05 20:02:11.281061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:10.029 [2024-12-05 20:02:11.282876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:10.029 [2024-12-05 20:02:11.282952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:10.029 [2024-12-05 20:02:11.282965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:10.029 [2024-12-05 20:02:11.283260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:10.029 [2024-12-05 20:02:11.283395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:10.029 [2024-12-05 20:02:11.283403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:10.029 [2024-12-05 20:02:11.283579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:09:10.029 [2024-12-05 20:02:11.292991] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:10.029 [2024-12-05 20:02:11.293017] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:10.029 true 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 [2024-12-05 20:02:11.309141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 [2024-12-05 20:02:11.352952] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:10.029 [2024-12-05 20:02:11.352991] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:10.029 [2024-12-05 20:02:11.353031] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:10.029 true 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 [2024-12-05 20:02:11.369075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60713 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60713 ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60713 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60713 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60713' 00:09:10.029 killing process with pid 60713 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60713 00:09:10.029 [2024-12-05 20:02:11.453617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.029 [2024-12-05 20:02:11.453788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.029 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60713 00:09:10.029 [2024-12-05 20:02:11.453903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.029 [2024-12-05 20:02:11.453959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:10.287 [2024-12-05 20:02:11.472719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.219 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:11.219 00:09:11.219 real 0m2.286s 00:09:11.219 user 0m2.426s 00:09:11.219 sys 0m0.349s 00:09:11.219 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.219 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.219 ************************************ 00:09:11.219 END TEST raid0_resize_test 00:09:11.219 ************************************ 00:09:11.477 20:02:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:11.477 
20:02:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.478 20:02:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.478 20:02:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.478 ************************************ 00:09:11.478 START TEST raid1_resize_test 00:09:11.478 ************************************ 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:11.478 Process raid pid: 60769 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60769 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60769' 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60769 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60769 ']' 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.478 20:02:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.478 [2024-12-05 20:02:12.767034] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:11.478 [2024-12-05 20:02:12.767252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.737 [2024-12-05 20:02:12.940923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.737 [2024-12-05 20:02:13.051903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.995 [2024-12-05 20:02:13.256399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.995 [2024-12-05 20:02:13.256514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.254 
Base_1 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.254 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.255 Base_2 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.255 [2024-12-05 20:02:13.628456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:12.255 [2024-12-05 20:02:13.630269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:12.255 [2024-12-05 20:02:13.630367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:12.255 [2024-12-05 20:02:13.630407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:12.255 [2024-12-05 20:02:13.630683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:12.255 [2024-12-05 20:02:13.630855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:12.255 [2024-12-05 20:02:13.630907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:12.255 [2024-12-05 20:02:13.631090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.255 [2024-12-05 20:02:13.640427] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:12.255 [2024-12-05 20:02:13.640500] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:12.255 true 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.255 [2024-12-05 20:02:13.656569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.255 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.513 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:12.513 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:12.513 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:12.513 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:12.513 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.514 [2024-12-05 20:02:13.700373] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:12.514 [2024-12-05 20:02:13.700470] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:12.514 [2024-12-05 20:02:13.700544] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:12.514 true 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.514 [2024-12-05 20:02:13.716478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60769 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60769 ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60769 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60769 00:09:12.514 killing process with pid 60769 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60769' 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60769 00:09:12.514 [2024-12-05 20:02:13.794497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.514 [2024-12-05 20:02:13.794594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.514 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60769 00:09:12.514 [2024-12-05 20:02:13.795098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.514 [2024-12-05 20:02:13.795117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:12.514 [2024-12-05 20:02:13.813335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.891 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:13.891 00:09:13.891 real 0m2.261s 00:09:13.891 user 0m2.410s 00:09:13.891 sys 0m0.327s 00:09:13.891 20:02:14 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.891 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 ************************************ 00:09:13.891 END TEST raid1_resize_test 00:09:13.891 ************************************ 00:09:13.891 20:02:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:13.891 20:02:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:13.891 20:02:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:13.891 20:02:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.891 20:02:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.891 20:02:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 ************************************ 00:09:13.891 START TEST raid_state_function_test 00:09:13.891 ************************************ 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:13.891 Process raid pid: 60826 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60826 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.891 20:02:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60826' 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60826 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60826 ']' 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.891 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 [2024-12-05 20:02:15.135886] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:13.891 [2024-12-05 20:02:15.136112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.891 [2024-12-05 20:02:15.309318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.149 [2024-12-05 20:02:15.422068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.406 [2024-12-05 20:02:15.621383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.406 [2024-12-05 20:02:15.621513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.665 [2024-12-05 20:02:15.959145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.665 [2024-12-05 20:02:15.959262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.665 [2024-12-05 20:02:15.959295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.665 [2024-12-05 20:02:15.959320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.665 20:02:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.665 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.665 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.665 "name": "Existed_Raid", 00:09:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.665 "strip_size_kb": 64, 00:09:14.665 "state": "configuring", 00:09:14.665 
"raid_level": "raid0", 00:09:14.665 "superblock": false, 00:09:14.665 "num_base_bdevs": 2, 00:09:14.665 "num_base_bdevs_discovered": 0, 00:09:14.665 "num_base_bdevs_operational": 2, 00:09:14.665 "base_bdevs_list": [ 00:09:14.665 { 00:09:14.665 "name": "BaseBdev1", 00:09:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.665 "is_configured": false, 00:09:14.665 "data_offset": 0, 00:09:14.665 "data_size": 0 00:09:14.665 }, 00:09:14.665 { 00:09:14.665 "name": "BaseBdev2", 00:09:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.665 "is_configured": false, 00:09:14.665 "data_offset": 0, 00:09:14.665 "data_size": 0 00:09:14.665 } 00:09:14.665 ] 00:09:14.665 }' 00:09:14.665 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.665 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.924 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.924 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.924 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.924 [2024-12-05 20:02:16.358420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.924 [2024-12-05 20:02:16.358457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:15.183 [2024-12-05 20:02:16.370385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.183 [2024-12-05 20:02:16.370476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.183 [2024-12-05 20:02:16.370489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.183 [2024-12-05 20:02:16.370502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.183 [2024-12-05 20:02:16.418546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.183 BaseBdev1 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.183 [ 00:09:15.183 { 00:09:15.183 "name": "BaseBdev1", 00:09:15.183 "aliases": [ 00:09:15.183 "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc" 00:09:15.183 ], 00:09:15.183 "product_name": "Malloc disk", 00:09:15.183 "block_size": 512, 00:09:15.183 "num_blocks": 65536, 00:09:15.183 "uuid": "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc", 00:09:15.183 "assigned_rate_limits": { 00:09:15.183 "rw_ios_per_sec": 0, 00:09:15.183 "rw_mbytes_per_sec": 0, 00:09:15.183 "r_mbytes_per_sec": 0, 00:09:15.183 "w_mbytes_per_sec": 0 00:09:15.183 }, 00:09:15.183 "claimed": true, 00:09:15.183 "claim_type": "exclusive_write", 00:09:15.183 "zoned": false, 00:09:15.183 "supported_io_types": { 00:09:15.183 "read": true, 00:09:15.183 "write": true, 00:09:15.183 "unmap": true, 00:09:15.183 "flush": true, 00:09:15.183 "reset": true, 00:09:15.183 "nvme_admin": false, 00:09:15.183 "nvme_io": false, 00:09:15.183 "nvme_io_md": false, 00:09:15.183 "write_zeroes": true, 00:09:15.183 "zcopy": true, 00:09:15.183 "get_zone_info": false, 00:09:15.183 "zone_management": false, 00:09:15.183 "zone_append": false, 00:09:15.183 "compare": false, 00:09:15.183 "compare_and_write": false, 00:09:15.183 "abort": true, 00:09:15.183 "seek_hole": false, 00:09:15.183 "seek_data": false, 00:09:15.183 "copy": true, 00:09:15.183 "nvme_iov_md": 
false 00:09:15.183 }, 00:09:15.183 "memory_domains": [ 00:09:15.183 { 00:09:15.183 "dma_device_id": "system", 00:09:15.183 "dma_device_type": 1 00:09:15.183 }, 00:09:15.183 { 00:09:15.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.183 "dma_device_type": 2 00:09:15.183 } 00:09:15.183 ], 00:09:15.183 "driver_specific": {} 00:09:15.183 } 00:09:15.183 ] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.183 
20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.183 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.183 "name": "Existed_Raid", 00:09:15.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.183 "strip_size_kb": 64, 00:09:15.184 "state": "configuring", 00:09:15.184 "raid_level": "raid0", 00:09:15.184 "superblock": false, 00:09:15.184 "num_base_bdevs": 2, 00:09:15.184 "num_base_bdevs_discovered": 1, 00:09:15.184 "num_base_bdevs_operational": 2, 00:09:15.184 "base_bdevs_list": [ 00:09:15.184 { 00:09:15.184 "name": "BaseBdev1", 00:09:15.184 "uuid": "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc", 00:09:15.184 "is_configured": true, 00:09:15.184 "data_offset": 0, 00:09:15.184 "data_size": 65536 00:09:15.184 }, 00:09:15.184 { 00:09:15.184 "name": "BaseBdev2", 00:09:15.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.184 "is_configured": false, 00:09:15.184 "data_offset": 0, 00:09:15.184 "data_size": 0 00:09:15.184 } 00:09:15.184 ] 00:09:15.184 }' 00:09:15.184 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.184 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.442 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.442 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.442 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.700 [2024-12-05 20:02:16.881853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.700 [2024-12-05 20:02:16.881990] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.700 [2024-12-05 20:02:16.893842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.700 [2024-12-05 20:02:16.895688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.700 [2024-12-05 20:02:16.895764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.700 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.701 "name": "Existed_Raid", 00:09:15.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.701 "strip_size_kb": 64, 00:09:15.701 "state": "configuring", 00:09:15.701 "raid_level": "raid0", 00:09:15.701 "superblock": false, 00:09:15.701 "num_base_bdevs": 2, 00:09:15.701 "num_base_bdevs_discovered": 1, 00:09:15.701 "num_base_bdevs_operational": 2, 00:09:15.701 "base_bdevs_list": [ 00:09:15.701 { 00:09:15.701 "name": "BaseBdev1", 00:09:15.701 "uuid": "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc", 00:09:15.701 "is_configured": true, 00:09:15.701 "data_offset": 0, 00:09:15.701 "data_size": 65536 00:09:15.701 }, 00:09:15.701 { 00:09:15.701 "name": "BaseBdev2", 00:09:15.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.701 "is_configured": false, 00:09:15.701 "data_offset": 0, 00:09:15.701 "data_size": 0 00:09:15.701 } 00:09:15.701 
] 00:09:15.701 }' 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.701 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.959 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.959 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.959 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.959 [2024-12-05 20:02:17.366919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.959 [2024-12-05 20:02:17.366973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.960 [2024-12-05 20:02:17.366982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:15.960 [2024-12-05 20:02:17.367268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:15.960 [2024-12-05 20:02:17.367452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.960 [2024-12-05 20:02:17.367465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.960 [2024-12-05 20:02:17.367752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.960 BaseBdev2 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.960 20:02:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.960 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.960 [ 00:09:15.960 { 00:09:15.960 "name": "BaseBdev2", 00:09:15.960 "aliases": [ 00:09:15.960 "13009cd4-e3d7-4a8d-a39f-3df191c29b81" 00:09:15.960 ], 00:09:15.960 "product_name": "Malloc disk", 00:09:15.960 "block_size": 512, 00:09:15.960 "num_blocks": 65536, 00:09:15.960 "uuid": "13009cd4-e3d7-4a8d-a39f-3df191c29b81", 00:09:15.960 "assigned_rate_limits": { 00:09:15.960 "rw_ios_per_sec": 0, 00:09:16.218 "rw_mbytes_per_sec": 0, 00:09:16.218 "r_mbytes_per_sec": 0, 00:09:16.218 "w_mbytes_per_sec": 0 00:09:16.218 }, 00:09:16.218 "claimed": true, 00:09:16.218 "claim_type": "exclusive_write", 00:09:16.218 "zoned": false, 00:09:16.218 "supported_io_types": { 00:09:16.218 "read": true, 00:09:16.218 "write": true, 00:09:16.218 "unmap": true, 00:09:16.218 "flush": true, 00:09:16.218 "reset": true, 00:09:16.218 "nvme_admin": false, 00:09:16.218 "nvme_io": false, 00:09:16.218 "nvme_io_md": 
false, 00:09:16.218 "write_zeroes": true, 00:09:16.218 "zcopy": true, 00:09:16.218 "get_zone_info": false, 00:09:16.218 "zone_management": false, 00:09:16.218 "zone_append": false, 00:09:16.218 "compare": false, 00:09:16.218 "compare_and_write": false, 00:09:16.218 "abort": true, 00:09:16.218 "seek_hole": false, 00:09:16.218 "seek_data": false, 00:09:16.218 "copy": true, 00:09:16.218 "nvme_iov_md": false 00:09:16.218 }, 00:09:16.218 "memory_domains": [ 00:09:16.218 { 00:09:16.218 "dma_device_id": "system", 00:09:16.218 "dma_device_type": 1 00:09:16.218 }, 00:09:16.218 { 00:09:16.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.218 "dma_device_type": 2 00:09:16.218 } 00:09:16.218 ], 00:09:16.218 "driver_specific": {} 00:09:16.218 } 00:09:16.218 ] 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.219 "name": "Existed_Raid", 00:09:16.219 "uuid": "148ee123-834d-4187-8cdd-9adb039f2aa4", 00:09:16.219 "strip_size_kb": 64, 00:09:16.219 "state": "online", 00:09:16.219 "raid_level": "raid0", 00:09:16.219 "superblock": false, 00:09:16.219 "num_base_bdevs": 2, 00:09:16.219 "num_base_bdevs_discovered": 2, 00:09:16.219 "num_base_bdevs_operational": 2, 00:09:16.219 "base_bdevs_list": [ 00:09:16.219 { 00:09:16.219 "name": "BaseBdev1", 00:09:16.219 "uuid": "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc", 00:09:16.219 "is_configured": true, 00:09:16.219 "data_offset": 0, 00:09:16.219 "data_size": 65536 00:09:16.219 }, 00:09:16.219 { 00:09:16.219 "name": "BaseBdev2", 00:09:16.219 "uuid": "13009cd4-e3d7-4a8d-a39f-3df191c29b81", 00:09:16.219 "is_configured": true, 00:09:16.219 "data_offset": 0, 00:09:16.219 "data_size": 65536 00:09:16.219 } 00:09:16.219 ] 00:09:16.219 }' 00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:16.219 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.478 [2024-12-05 20:02:17.822421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.478 "name": "Existed_Raid", 00:09:16.478 "aliases": [ 00:09:16.478 "148ee123-834d-4187-8cdd-9adb039f2aa4" 00:09:16.478 ], 00:09:16.478 "product_name": "Raid Volume", 00:09:16.478 "block_size": 512, 00:09:16.478 "num_blocks": 131072, 00:09:16.478 "uuid": "148ee123-834d-4187-8cdd-9adb039f2aa4", 00:09:16.478 "assigned_rate_limits": { 00:09:16.478 "rw_ios_per_sec": 0, 00:09:16.478 "rw_mbytes_per_sec": 0, 00:09:16.478 "r_mbytes_per_sec": 
0, 00:09:16.478 "w_mbytes_per_sec": 0 00:09:16.478 }, 00:09:16.478 "claimed": false, 00:09:16.478 "zoned": false, 00:09:16.478 "supported_io_types": { 00:09:16.478 "read": true, 00:09:16.478 "write": true, 00:09:16.478 "unmap": true, 00:09:16.478 "flush": true, 00:09:16.478 "reset": true, 00:09:16.478 "nvme_admin": false, 00:09:16.478 "nvme_io": false, 00:09:16.478 "nvme_io_md": false, 00:09:16.478 "write_zeroes": true, 00:09:16.478 "zcopy": false, 00:09:16.478 "get_zone_info": false, 00:09:16.478 "zone_management": false, 00:09:16.478 "zone_append": false, 00:09:16.478 "compare": false, 00:09:16.478 "compare_and_write": false, 00:09:16.478 "abort": false, 00:09:16.478 "seek_hole": false, 00:09:16.478 "seek_data": false, 00:09:16.478 "copy": false, 00:09:16.478 "nvme_iov_md": false 00:09:16.478 }, 00:09:16.478 "memory_domains": [ 00:09:16.478 { 00:09:16.478 "dma_device_id": "system", 00:09:16.478 "dma_device_type": 1 00:09:16.478 }, 00:09:16.478 { 00:09:16.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.478 "dma_device_type": 2 00:09:16.478 }, 00:09:16.478 { 00:09:16.478 "dma_device_id": "system", 00:09:16.478 "dma_device_type": 1 00:09:16.478 }, 00:09:16.478 { 00:09:16.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.478 "dma_device_type": 2 00:09:16.478 } 00:09:16.478 ], 00:09:16.478 "driver_specific": { 00:09:16.478 "raid": { 00:09:16.478 "uuid": "148ee123-834d-4187-8cdd-9adb039f2aa4", 00:09:16.478 "strip_size_kb": 64, 00:09:16.478 "state": "online", 00:09:16.478 "raid_level": "raid0", 00:09:16.478 "superblock": false, 00:09:16.478 "num_base_bdevs": 2, 00:09:16.478 "num_base_bdevs_discovered": 2, 00:09:16.478 "num_base_bdevs_operational": 2, 00:09:16.478 "base_bdevs_list": [ 00:09:16.478 { 00:09:16.478 "name": "BaseBdev1", 00:09:16.478 "uuid": "2fc59feb-ff3f-4d62-8a6a-d265ce6f77cc", 00:09:16.478 "is_configured": true, 00:09:16.478 "data_offset": 0, 00:09:16.478 "data_size": 65536 00:09:16.478 }, 00:09:16.478 { 00:09:16.478 "name": "BaseBdev2", 
00:09:16.478 "uuid": "13009cd4-e3d7-4a8d-a39f-3df191c29b81", 00:09:16.478 "is_configured": true, 00:09:16.478 "data_offset": 0, 00:09:16.478 "data_size": 65536 00:09:16.478 } 00:09:16.478 ] 00:09:16.478 } 00:09:16.478 } 00:09:16.478 }' 00:09:16.478 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.479 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.479 BaseBdev2' 00:09:16.479 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.738 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.738 [2024-12-05 20:02:18.053827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.738 [2024-12-05 20:02:18.053919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.738 [2024-12-05 20:02:18.053998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.738 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.997 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.997 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.997 "name": "Existed_Raid", 00:09:16.997 "uuid": "148ee123-834d-4187-8cdd-9adb039f2aa4", 00:09:16.997 "strip_size_kb": 64, 00:09:16.997 
"state": "offline", 00:09:16.997 "raid_level": "raid0", 00:09:16.997 "superblock": false, 00:09:16.997 "num_base_bdevs": 2, 00:09:16.997 "num_base_bdevs_discovered": 1, 00:09:16.997 "num_base_bdevs_operational": 1, 00:09:16.997 "base_bdevs_list": [ 00:09:16.997 { 00:09:16.997 "name": null, 00:09:16.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.997 "is_configured": false, 00:09:16.997 "data_offset": 0, 00:09:16.997 "data_size": 65536 00:09:16.997 }, 00:09:16.997 { 00:09:16.997 "name": "BaseBdev2", 00:09:16.997 "uuid": "13009cd4-e3d7-4a8d-a39f-3df191c29b81", 00:09:16.997 "is_configured": true, 00:09:16.997 "data_offset": 0, 00:09:16.997 "data_size": 65536 00:09:16.997 } 00:09:16.997 ] 00:09:16.997 }' 00:09:16.997 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.997 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.256 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.256 [2024-12-05 20:02:18.619217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.256 [2024-12-05 20:02:18.619325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60826 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60826 ']' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60826 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60826 00:09:17.515 killing process with pid 60826 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60826' 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60826 00:09:17.515 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60826 00:09:17.515 [2024-12-05 20:02:18.800589] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.515 [2024-12-05 20:02:18.816955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.894 ************************************ 00:09:18.894 END TEST raid_state_function_test 00:09:18.894 ************************************ 00:09:18.894 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.894 00:09:18.894 real 0m4.940s 00:09:18.894 user 0m7.083s 00:09:18.894 sys 0m0.776s 00:09:18.894 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.894 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.894 20:02:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:18.894 20:02:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:18.894 20:02:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.894 20:02:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.894 ************************************ 00:09:18.894 START TEST raid_state_function_test_sb 00:09:18.894 ************************************ 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.894 Process raid pid: 61079 00:09:18.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
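The xtrace above repeatedly runs `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all` over RPC, narrows the result with `jq -r '.[] | select(.name == "Existed_Raid")'`, and then compares individual fields (state, raid_level, strip_size) against the expected values. A minimal sketch of that field check, using plain bash parameter expansion in place of jq and a JSON literal trimmed from the dump above:

```shell
# Trimmed copy of the JSON that bdev_raid_get_bdevs returned in the trace above.
raid_bdev_info='{ "name": "Existed_Raid", "state": "offline", "raid_level": "raid0" }'

# verify_raid_bdev_state checks fields one by one; here the state field is
# extracted with parameter expansion instead of the jq pipeline in the trace.
state=${raid_bdev_info#*\"state\": \"}
state=${state%%\"*}

expected_state=offline
if [[ $state != "$expected_state" ]]; then
    echo "state mismatch: got $state, want $expected_state" >&2
    exit 1
fi
echo "$state"
```

The real helper keeps the full JSON in `raid_bdev_info` so several fields can be compared from one RPC round trip.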
00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61079 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61079' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61079 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61079 ']' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.894 20:02:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.894 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.894 [2024-12-05 20:02:20.093825] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:18.894 [2024-12-05 20:02:20.093953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.894 [2024-12-05 20:02:20.268042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.154 [2024-12-05 20:02:20.385681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.413 [2024-12-05 20:02:20.593332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.413 [2024-12-05 20:02:20.593469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.672 [2024-12-05 20:02:20.981090] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.672 [2024-12-05 20:02:20.981198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.672 [2024-12-05 20:02:20.981233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.672 [2024-12-05 20:02:20.981260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.672 
20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.672 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.672 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.672 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.672 "name": "Existed_Raid", 00:09:19.672 "uuid": "12aa1d65-a79e-4974-ba09-81e08201553a", 00:09:19.672 "strip_size_kb": 64, 00:09:19.672 "state": "configuring", 00:09:19.672 "raid_level": "raid0", 00:09:19.672 "superblock": true, 00:09:19.672 "num_base_bdevs": 2, 00:09:19.672 "num_base_bdevs_discovered": 0, 00:09:19.672 "num_base_bdevs_operational": 2, 00:09:19.672 "base_bdevs_list": [ 00:09:19.672 { 00:09:19.672 "name": "BaseBdev1", 00:09:19.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.672 "is_configured": false, 00:09:19.672 "data_offset": 0, 00:09:19.672 "data_size": 0 00:09:19.673 }, 00:09:19.673 { 00:09:19.673 "name": "BaseBdev2", 00:09:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.673 "is_configured": false, 00:09:19.673 "data_offset": 0, 00:09:19.673 "data_size": 0 00:09:19.673 } 00:09:19.673 ] 00:09:19.673 }' 00:09:19.673 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.673 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.241 
[2024-12-05 20:02:21.432237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.241 [2024-12-05 20:02:21.432272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.241 [2024-12-05 20:02:21.440205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.241 [2024-12-05 20:02:21.440244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.241 [2024-12-05 20:02:21.440253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.241 [2024-12-05 20:02:21.440265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.241 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 BaseBdev1 00:09:20.242 [2024-12-05 20:02:21.482118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 [ 00:09:20.242 { 00:09:20.242 "name": "BaseBdev1", 00:09:20.242 "aliases": [ 00:09:20.242 "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213" 00:09:20.242 ], 00:09:20.242 "product_name": "Malloc disk", 00:09:20.242 "block_size": 512, 00:09:20.242 "num_blocks": 65536, 00:09:20.242 "uuid": "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213", 00:09:20.242 "assigned_rate_limits": { 00:09:20.242 "rw_ios_per_sec": 0, 00:09:20.242 "rw_mbytes_per_sec": 0, 
00:09:20.242 "r_mbytes_per_sec": 0, 00:09:20.242 "w_mbytes_per_sec": 0 00:09:20.242 }, 00:09:20.242 "claimed": true, 00:09:20.242 "claim_type": "exclusive_write", 00:09:20.242 "zoned": false, 00:09:20.242 "supported_io_types": { 00:09:20.242 "read": true, 00:09:20.242 "write": true, 00:09:20.242 "unmap": true, 00:09:20.242 "flush": true, 00:09:20.242 "reset": true, 00:09:20.242 "nvme_admin": false, 00:09:20.242 "nvme_io": false, 00:09:20.242 "nvme_io_md": false, 00:09:20.242 "write_zeroes": true, 00:09:20.242 "zcopy": true, 00:09:20.242 "get_zone_info": false, 00:09:20.242 "zone_management": false, 00:09:20.242 "zone_append": false, 00:09:20.242 "compare": false, 00:09:20.242 "compare_and_write": false, 00:09:20.242 "abort": true, 00:09:20.242 "seek_hole": false, 00:09:20.242 "seek_data": false, 00:09:20.242 "copy": true, 00:09:20.242 "nvme_iov_md": false 00:09:20.242 }, 00:09:20.242 "memory_domains": [ 00:09:20.242 { 00:09:20.242 "dma_device_id": "system", 00:09:20.242 "dma_device_type": 1 00:09:20.242 }, 00:09:20.242 { 00:09:20.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.242 "dma_device_type": 2 00:09:20.242 } 00:09:20.242 ], 00:09:20.242 "driver_specific": {} 00:09:20.242 } 00:09:20.242 ] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.242 20:02:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.242 "name": "Existed_Raid", 00:09:20.242 "uuid": "43098ddc-dca0-4357-b757-a5dd1071a290", 00:09:20.242 "strip_size_kb": 64, 00:09:20.242 "state": "configuring", 00:09:20.242 "raid_level": "raid0", 00:09:20.242 "superblock": true, 00:09:20.242 "num_base_bdevs": 2, 00:09:20.242 "num_base_bdevs_discovered": 1, 00:09:20.242 "num_base_bdevs_operational": 2, 00:09:20.242 "base_bdevs_list": [ 00:09:20.242 { 00:09:20.242 "name": "BaseBdev1", 00:09:20.242 "uuid": "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213", 00:09:20.242 "is_configured": true, 00:09:20.242 "data_offset": 2048, 00:09:20.242 "data_size": 63488 00:09:20.242 }, 00:09:20.242 { 
00:09:20.242 "name": "BaseBdev2", 00:09:20.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.242 "is_configured": false, 00:09:20.242 "data_offset": 0, 00:09:20.242 "data_size": 0 00:09:20.242 } 00:09:20.242 ] 00:09:20.242 }' 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.242 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.502 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.502 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.502 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.502 [2024-12-05 20:02:21.917444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.502 [2024-12-05 20:02:21.917542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.502 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.503 [2024-12-05 20:02:21.925459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.503 [2024-12-05 20:02:21.927362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.503 [2024-12-05 20:02:21.927439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.503 20:02:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.503 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.761 20:02:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.761 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.761 "name": "Existed_Raid", 00:09:20.761 "uuid": "ef1c399b-6556-4f37-a841-8d263aa01b1e", 00:09:20.761 "strip_size_kb": 64, 00:09:20.761 "state": "configuring", 00:09:20.761 "raid_level": "raid0", 00:09:20.761 "superblock": true, 00:09:20.761 "num_base_bdevs": 2, 00:09:20.762 "num_base_bdevs_discovered": 1, 00:09:20.762 "num_base_bdevs_operational": 2, 00:09:20.762 "base_bdevs_list": [ 00:09:20.762 { 00:09:20.762 "name": "BaseBdev1", 00:09:20.762 "uuid": "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213", 00:09:20.762 "is_configured": true, 00:09:20.762 "data_offset": 2048, 00:09:20.762 "data_size": 63488 00:09:20.762 }, 00:09:20.762 { 00:09:20.762 "name": "BaseBdev2", 00:09:20.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.762 "is_configured": false, 00:09:20.762 "data_offset": 0, 00:09:20.762 "data_size": 0 00:09:20.762 } 00:09:20.762 ] 00:09:20.762 }' 00:09:20.762 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.762 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.021 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.021 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.021 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.021 [2024-12-05 20:02:22.407278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.021 [2024-12-05 20:02:22.407659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.021 [2024-12-05 20:02:22.407713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:09:21.022 [2024-12-05 20:02:22.408006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:21.022 BaseBdev2 00:09:21.022 [2024-12-05 20:02:22.408214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.022 [2024-12-05 20:02:22.408232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.022 [2024-12-05 20:02:22.408375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.022 20:02:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.022 [ 00:09:21.022 { 00:09:21.022 "name": "BaseBdev2", 00:09:21.022 "aliases": [ 00:09:21.022 "bbab9355-9b58-4760-bd50-8e80d8c9b526" 00:09:21.022 ], 00:09:21.022 "product_name": "Malloc disk", 00:09:21.022 "block_size": 512, 00:09:21.022 "num_blocks": 65536, 00:09:21.022 "uuid": "bbab9355-9b58-4760-bd50-8e80d8c9b526", 00:09:21.022 "assigned_rate_limits": { 00:09:21.022 "rw_ios_per_sec": 0, 00:09:21.022 "rw_mbytes_per_sec": 0, 00:09:21.022 "r_mbytes_per_sec": 0, 00:09:21.022 "w_mbytes_per_sec": 0 00:09:21.022 }, 00:09:21.022 "claimed": true, 00:09:21.022 "claim_type": "exclusive_write", 00:09:21.022 "zoned": false, 00:09:21.022 "supported_io_types": { 00:09:21.022 "read": true, 00:09:21.022 "write": true, 00:09:21.022 "unmap": true, 00:09:21.022 "flush": true, 00:09:21.022 "reset": true, 00:09:21.022 "nvme_admin": false, 00:09:21.022 "nvme_io": false, 00:09:21.022 "nvme_io_md": false, 00:09:21.022 "write_zeroes": true, 00:09:21.022 "zcopy": true, 00:09:21.022 "get_zone_info": false, 00:09:21.022 "zone_management": false, 00:09:21.022 "zone_append": false, 00:09:21.022 "compare": false, 00:09:21.022 "compare_and_write": false, 00:09:21.022 "abort": true, 00:09:21.022 "seek_hole": false, 00:09:21.022 "seek_data": false, 00:09:21.022 "copy": true, 00:09:21.022 "nvme_iov_md": false 00:09:21.022 }, 00:09:21.022 "memory_domains": [ 00:09:21.022 { 00:09:21.022 "dma_device_id": "system", 00:09:21.022 "dma_device_type": 1 00:09:21.022 }, 00:09:21.022 { 00:09:21.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.022 "dma_device_type": 2 00:09:21.022 } 00:09:21.022 ], 00:09:21.022 "driver_specific": {} 00:09:21.022 } 00:09:21.022 ] 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.022 20:02:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.022 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.022 20:02:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.335 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.335 "name": "Existed_Raid", 00:09:21.335 "uuid": "ef1c399b-6556-4f37-a841-8d263aa01b1e", 00:09:21.335 "strip_size_kb": 64, 00:09:21.335 "state": "online", 00:09:21.335 "raid_level": "raid0", 00:09:21.335 "superblock": true, 00:09:21.335 "num_base_bdevs": 2, 00:09:21.335 "num_base_bdevs_discovered": 2, 00:09:21.335 "num_base_bdevs_operational": 2, 00:09:21.335 "base_bdevs_list": [ 00:09:21.335 { 00:09:21.335 "name": "BaseBdev1", 00:09:21.335 "uuid": "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213", 00:09:21.335 "is_configured": true, 00:09:21.335 "data_offset": 2048, 00:09:21.335 "data_size": 63488 00:09:21.335 }, 00:09:21.335 { 00:09:21.335 "name": "BaseBdev2", 00:09:21.335 "uuid": "bbab9355-9b58-4760-bd50-8e80d8c9b526", 00:09:21.335 "is_configured": true, 00:09:21.335 "data_offset": 2048, 00:09:21.335 "data_size": 63488 00:09:21.335 } 00:09:21.335 ] 00:09:21.335 }' 00:09:21.335 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.335 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.604 
20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.604 [2024-12-05 20:02:22.854866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.604 "name": "Existed_Raid", 00:09:21.604 "aliases": [ 00:09:21.604 "ef1c399b-6556-4f37-a841-8d263aa01b1e" 00:09:21.604 ], 00:09:21.604 "product_name": "Raid Volume", 00:09:21.604 "block_size": 512, 00:09:21.604 "num_blocks": 126976, 00:09:21.604 "uuid": "ef1c399b-6556-4f37-a841-8d263aa01b1e", 00:09:21.604 "assigned_rate_limits": { 00:09:21.604 "rw_ios_per_sec": 0, 00:09:21.604 "rw_mbytes_per_sec": 0, 00:09:21.604 "r_mbytes_per_sec": 0, 00:09:21.604 "w_mbytes_per_sec": 0 00:09:21.604 }, 00:09:21.604 "claimed": false, 00:09:21.604 "zoned": false, 00:09:21.604 "supported_io_types": { 00:09:21.604 "read": true, 00:09:21.604 "write": true, 00:09:21.604 "unmap": true, 00:09:21.604 "flush": true, 00:09:21.604 "reset": true, 00:09:21.604 "nvme_admin": false, 00:09:21.604 "nvme_io": false, 00:09:21.604 "nvme_io_md": false, 00:09:21.604 "write_zeroes": true, 00:09:21.604 "zcopy": false, 00:09:21.604 "get_zone_info": false, 00:09:21.604 "zone_management": false, 00:09:21.604 "zone_append": false, 00:09:21.604 "compare": false, 00:09:21.604 "compare_and_write": false, 00:09:21.604 "abort": false, 00:09:21.604 "seek_hole": false, 00:09:21.604 "seek_data": false, 00:09:21.604 "copy": false, 00:09:21.604 
"nvme_iov_md": false 00:09:21.604 }, 00:09:21.604 "memory_domains": [ 00:09:21.604 { 00:09:21.604 "dma_device_id": "system", 00:09:21.604 "dma_device_type": 1 00:09:21.604 }, 00:09:21.604 { 00:09:21.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.604 "dma_device_type": 2 00:09:21.604 }, 00:09:21.604 { 00:09:21.604 "dma_device_id": "system", 00:09:21.604 "dma_device_type": 1 00:09:21.604 }, 00:09:21.604 { 00:09:21.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.604 "dma_device_type": 2 00:09:21.604 } 00:09:21.604 ], 00:09:21.604 "driver_specific": { 00:09:21.604 "raid": { 00:09:21.604 "uuid": "ef1c399b-6556-4f37-a841-8d263aa01b1e", 00:09:21.604 "strip_size_kb": 64, 00:09:21.604 "state": "online", 00:09:21.604 "raid_level": "raid0", 00:09:21.604 "superblock": true, 00:09:21.604 "num_base_bdevs": 2, 00:09:21.604 "num_base_bdevs_discovered": 2, 00:09:21.604 "num_base_bdevs_operational": 2, 00:09:21.604 "base_bdevs_list": [ 00:09:21.604 { 00:09:21.604 "name": "BaseBdev1", 00:09:21.604 "uuid": "0e6c4e9c-bd1b-4d5f-aecb-6d4ca8d38213", 00:09:21.604 "is_configured": true, 00:09:21.604 "data_offset": 2048, 00:09:21.604 "data_size": 63488 00:09:21.604 }, 00:09:21.604 { 00:09:21.604 "name": "BaseBdev2", 00:09:21.604 "uuid": "bbab9355-9b58-4760-bd50-8e80d8c9b526", 00:09:21.604 "is_configured": true, 00:09:21.604 "data_offset": 2048, 00:09:21.604 "data_size": 63488 00:09:21.604 } 00:09:21.604 ] 00:09:21.604 } 00:09:21.604 } 00:09:21.604 }' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.604 BaseBdev2' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.604 20:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.604 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.604 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.864 [2024-12-05 20:02:23.082244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.864 [2024-12-05 20:02:23.082277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.864 [2024-12-05 20:02:23.082329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:21.864 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.865 20:02:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.865 "name": "Existed_Raid", 00:09:21.865 "uuid": "ef1c399b-6556-4f37-a841-8d263aa01b1e", 00:09:21.865 "strip_size_kb": 64, 00:09:21.865 "state": "offline", 00:09:21.865 "raid_level": "raid0", 00:09:21.865 "superblock": true, 00:09:21.865 "num_base_bdevs": 2, 00:09:21.865 "num_base_bdevs_discovered": 1, 00:09:21.865 "num_base_bdevs_operational": 1, 00:09:21.865 "base_bdevs_list": [ 00:09:21.865 { 00:09:21.865 "name": null, 00:09:21.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.865 "is_configured": false, 00:09:21.865 "data_offset": 0, 00:09:21.865 "data_size": 63488 00:09:21.865 }, 00:09:21.865 { 00:09:21.865 "name": "BaseBdev2", 00:09:21.865 "uuid": "bbab9355-9b58-4760-bd50-8e80d8c9b526", 00:09:21.865 "is_configured": true, 
00:09:21.865 "data_offset": 2048, 00:09:21.865 "data_size": 63488 00:09:21.865 } 00:09:21.865 ] 00:09:21.865 }' 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.865 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 [2024-12-05 20:02:23.661013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.435 [2024-12-05 20:02:23.661111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.435 20:02:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61079 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61079 ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61079 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61079 00:09:22.435 killing process with pid 61079 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61079' 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61079 00:09:22.435 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61079 00:09:22.435 [2024-12-05 20:02:23.842851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.435 [2024-12-05 20:02:23.859680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.834 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.834 00:09:23.834 real 0m4.973s 00:09:23.834 user 0m7.220s 00:09:23.834 sys 0m0.757s 00:09:23.834 ************************************ 00:09:23.834 END TEST raid_state_function_test_sb 00:09:23.834 ************************************ 00:09:23.834 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.834 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.834 20:02:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:23.834 20:02:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:23.834 20:02:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.834 20:02:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.834 ************************************ 00:09:23.834 START TEST raid_superblock_test 00:09:23.834 ************************************ 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:23.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.834 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61330 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61330 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61330 ']' 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.835 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.835 [2024-12-05 20:02:25.131701] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:23.835 [2024-12-05 20:02:25.131820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:09:24.103 [2024-12-05 20:02:25.288926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.103 [2024-12-05 20:02:25.399868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.362 [2024-12-05 20:02:25.596703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.362 [2024-12-05 20:02:25.596801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:24.621 
20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.621 malloc1 00:09:24.621 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.622 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.622 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.622 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.622 [2024-12-05 20:02:26.003959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.622 [2024-12-05 20:02:26.004069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.622 [2024-12-05 20:02:26.004102] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:24.622 [2024-12-05 20:02:26.004112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.622 [2024-12-05 20:02:26.006125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.622 [2024-12-05 20:02:26.006161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.622 pt1 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.622 malloc2 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.622 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.622 [2024-12-05 20:02:26.056358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.622 [2024-12-05 20:02:26.056451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.622 [2024-12-05 20:02:26.056494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:24.622 [2024-12-05 20:02:26.056523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.881 [2024-12-05 20:02:26.058651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.881 [2024-12-05 20:02:26.058716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.881 
pt2 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.881 [2024-12-05 20:02:26.068395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.881 [2024-12-05 20:02:26.070272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.881 [2024-12-05 20:02:26.070460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:24.881 [2024-12-05 20:02:26.070507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.881 [2024-12-05 20:02:26.070779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.881 [2024-12-05 20:02:26.070986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:24.881 [2024-12-05 20:02:26.071051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:24.881 [2024-12-05 20:02:26.071254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:24.881 20:02:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.882 "name": "raid_bdev1", 00:09:24.882 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:24.882 "strip_size_kb": 64, 00:09:24.882 "state": "online", 00:09:24.882 "raid_level": "raid0", 00:09:24.882 "superblock": true, 00:09:24.882 "num_base_bdevs": 2, 00:09:24.882 "num_base_bdevs_discovered": 2, 00:09:24.882 "num_base_bdevs_operational": 2, 00:09:24.882 "base_bdevs_list": [ 00:09:24.882 { 00:09:24.882 "name": "pt1", 
00:09:24.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.882 "is_configured": true, 00:09:24.882 "data_offset": 2048, 00:09:24.882 "data_size": 63488 00:09:24.882 }, 00:09:24.882 { 00:09:24.882 "name": "pt2", 00:09:24.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.882 "is_configured": true, 00:09:24.882 "data_offset": 2048, 00:09:24.882 "data_size": 63488 00:09:24.882 } 00:09:24.882 ] 00:09:24.882 }' 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.882 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.142 [2024-12-05 20:02:26.523861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.142 "name": "raid_bdev1", 00:09:25.142 "aliases": [ 00:09:25.142 "6769bb94-7758-44d7-90d7-3196a66ca617" 00:09:25.142 ], 00:09:25.142 "product_name": "Raid Volume", 00:09:25.142 "block_size": 512, 00:09:25.142 "num_blocks": 126976, 00:09:25.142 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:25.142 "assigned_rate_limits": { 00:09:25.142 "rw_ios_per_sec": 0, 00:09:25.142 "rw_mbytes_per_sec": 0, 00:09:25.142 "r_mbytes_per_sec": 0, 00:09:25.142 "w_mbytes_per_sec": 0 00:09:25.142 }, 00:09:25.142 "claimed": false, 00:09:25.142 "zoned": false, 00:09:25.142 "supported_io_types": { 00:09:25.142 "read": true, 00:09:25.142 "write": true, 00:09:25.142 "unmap": true, 00:09:25.142 "flush": true, 00:09:25.142 "reset": true, 00:09:25.142 "nvme_admin": false, 00:09:25.142 "nvme_io": false, 00:09:25.142 "nvme_io_md": false, 00:09:25.142 "write_zeroes": true, 00:09:25.142 "zcopy": false, 00:09:25.142 "get_zone_info": false, 00:09:25.142 "zone_management": false, 00:09:25.142 "zone_append": false, 00:09:25.142 "compare": false, 00:09:25.142 "compare_and_write": false, 00:09:25.142 "abort": false, 00:09:25.142 "seek_hole": false, 00:09:25.142 "seek_data": false, 00:09:25.142 "copy": false, 00:09:25.142 "nvme_iov_md": false 00:09:25.142 }, 00:09:25.142 "memory_domains": [ 00:09:25.142 { 00:09:25.142 "dma_device_id": "system", 00:09:25.142 "dma_device_type": 1 00:09:25.142 }, 00:09:25.142 { 00:09:25.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.142 "dma_device_type": 2 00:09:25.142 }, 00:09:25.142 { 00:09:25.142 "dma_device_id": "system", 00:09:25.142 "dma_device_type": 1 00:09:25.142 }, 00:09:25.142 { 00:09:25.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.142 "dma_device_type": 2 00:09:25.142 } 00:09:25.142 ], 00:09:25.142 "driver_specific": { 00:09:25.142 "raid": { 00:09:25.142 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:25.142 "strip_size_kb": 64, 00:09:25.142 "state": "online", 00:09:25.142 
"raid_level": "raid0", 00:09:25.142 "superblock": true, 00:09:25.142 "num_base_bdevs": 2, 00:09:25.142 "num_base_bdevs_discovered": 2, 00:09:25.142 "num_base_bdevs_operational": 2, 00:09:25.142 "base_bdevs_list": [ 00:09:25.142 { 00:09:25.142 "name": "pt1", 00:09:25.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.142 "is_configured": true, 00:09:25.142 "data_offset": 2048, 00:09:25.142 "data_size": 63488 00:09:25.142 }, 00:09:25.142 { 00:09:25.142 "name": "pt2", 00:09:25.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.142 "is_configured": true, 00:09:25.142 "data_offset": 2048, 00:09:25.142 "data_size": 63488 00:09:25.142 } 00:09:25.142 ] 00:09:25.142 } 00:09:25.142 } 00:09:25.142 }' 00:09:25.142 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.402 pt2' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 [2024-12-05 20:02:26.711515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6769bb94-7758-44d7-90d7-3196a66ca617 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6769bb94-7758-44d7-90d7-3196a66ca617 ']' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 [2024-12-05 20:02:26.759146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.402 [2024-12-05 20:02:26.759207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.402 [2024-12-05 20:02:26.759302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.402 [2024-12-05 20:02:26.759383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.402 [2024-12-05 20:02:26.759419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.402 20:02:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.402 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 [2024-12-05 20:02:26.898971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.663 [2024-12-05 20:02:26.900998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:25.663 [2024-12-05 20:02:26.901120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.663 [2024-12-05 20:02:26.901180] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.663 [2024-12-05 20:02:26.901199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.663 [2024-12-05 20:02:26.901213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:25.663 request: 00:09:25.663 { 00:09:25.663 "name": "raid_bdev1", 00:09:25.663 "raid_level": "raid0", 00:09:25.663 "base_bdevs": [ 00:09:25.663 "malloc1", 00:09:25.663 "malloc2" 00:09:25.663 ], 00:09:25.663 "strip_size_kb": 64, 00:09:25.663 
"superblock": false, 00:09:25.663 "method": "bdev_raid_create", 00:09:25.663 "req_id": 1 00:09:25.663 } 00:09:25.663 Got JSON-RPC error response 00:09:25.663 response: 00:09:25.663 { 00:09:25.663 "code": -17, 00:09:25.663 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.663 } 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 [2024-12-05 20:02:26.962860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:09:25.663 [2024-12-05 20:02:26.962934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.663 [2024-12-05 20:02:26.962955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:25.663 [2024-12-05 20:02:26.962965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.663 [2024-12-05 20:02:26.965277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.663 [2024-12-05 20:02:26.965329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.663 [2024-12-05 20:02:26.965430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:25.663 [2024-12-05 20:02:26.965482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:25.663 pt1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.663 "name": "raid_bdev1", 00:09:25.663 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:25.663 "strip_size_kb": 64, 00:09:25.663 "state": "configuring", 00:09:25.663 "raid_level": "raid0", 00:09:25.663 "superblock": true, 00:09:25.663 "num_base_bdevs": 2, 00:09:25.663 "num_base_bdevs_discovered": 1, 00:09:25.663 "num_base_bdevs_operational": 2, 00:09:25.663 "base_bdevs_list": [ 00:09:25.663 { 00:09:25.663 "name": "pt1", 00:09:25.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.663 "is_configured": true, 00:09:25.663 "data_offset": 2048, 00:09:25.663 "data_size": 63488 00:09:25.663 }, 00:09:25.663 { 00:09:25.663 "name": null, 00:09:25.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.663 "is_configured": false, 00:09:25.663 "data_offset": 2048, 00:09:25.663 "data_size": 63488 00:09:25.663 } 00:09:25.663 ] 00:09:25.663 }' 00:09:25.663 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.663 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 [2024-12-05 20:02:27.382179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.249 [2024-12-05 20:02:27.382314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.249 [2024-12-05 20:02:27.382356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:26.249 [2024-12-05 20:02:27.382393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.249 [2024-12-05 20:02:27.382868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.249 [2024-12-05 20:02:27.382942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.249 [2024-12-05 20:02:27.383055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.249 [2024-12-05 20:02:27.383113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.249 [2024-12-05 20:02:27.383270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.249 [2024-12-05 20:02:27.383311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:26.249 [2024-12-05 20:02:27.383575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.249 [2024-12-05 20:02:27.383779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:26.249 [2024-12-05 20:02:27.383819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.249 [2024-12-05 20:02:27.384013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.249 pt2 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.249 "name": "raid_bdev1", 00:09:26.249 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:26.249 "strip_size_kb": 64, 00:09:26.249 "state": "online", 00:09:26.249 "raid_level": "raid0", 00:09:26.249 "superblock": true, 00:09:26.249 "num_base_bdevs": 2, 00:09:26.249 "num_base_bdevs_discovered": 2, 00:09:26.249 "num_base_bdevs_operational": 2, 00:09:26.249 "base_bdevs_list": [ 00:09:26.249 { 00:09:26.249 "name": "pt1", 00:09:26.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.249 "is_configured": true, 00:09:26.249 "data_offset": 2048, 00:09:26.249 "data_size": 63488 00:09:26.249 }, 00:09:26.249 { 00:09:26.249 "name": "pt2", 00:09:26.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.249 "is_configured": true, 00:09:26.249 "data_offset": 2048, 00:09:26.249 "data_size": 63488 00:09:26.249 } 00:09:26.249 ] 00:09:26.249 }' 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.249 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.509 20:02:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 [2024-12-05 20:02:27.837648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.509 "name": "raid_bdev1", 00:09:26.509 "aliases": [ 00:09:26.509 "6769bb94-7758-44d7-90d7-3196a66ca617" 00:09:26.509 ], 00:09:26.509 "product_name": "Raid Volume", 00:09:26.509 "block_size": 512, 00:09:26.509 "num_blocks": 126976, 00:09:26.509 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:26.509 "assigned_rate_limits": { 00:09:26.509 "rw_ios_per_sec": 0, 00:09:26.509 "rw_mbytes_per_sec": 0, 00:09:26.509 "r_mbytes_per_sec": 0, 00:09:26.509 "w_mbytes_per_sec": 0 00:09:26.509 }, 00:09:26.509 "claimed": false, 00:09:26.509 "zoned": false, 00:09:26.509 "supported_io_types": { 00:09:26.509 "read": true, 00:09:26.509 "write": true, 00:09:26.509 "unmap": true, 00:09:26.509 "flush": true, 00:09:26.509 "reset": true, 00:09:26.509 "nvme_admin": false, 00:09:26.509 "nvme_io": false, 00:09:26.509 "nvme_io_md": false, 00:09:26.509 "write_zeroes": true, 00:09:26.509 "zcopy": false, 00:09:26.509 "get_zone_info": false, 00:09:26.509 "zone_management": false, 00:09:26.509 "zone_append": false, 00:09:26.509 "compare": false, 00:09:26.509 "compare_and_write": false, 00:09:26.509 "abort": false, 00:09:26.509 "seek_hole": false, 00:09:26.509 
"seek_data": false, 00:09:26.509 "copy": false, 00:09:26.509 "nvme_iov_md": false 00:09:26.509 }, 00:09:26.509 "memory_domains": [ 00:09:26.509 { 00:09:26.509 "dma_device_id": "system", 00:09:26.509 "dma_device_type": 1 00:09:26.509 }, 00:09:26.509 { 00:09:26.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.509 "dma_device_type": 2 00:09:26.509 }, 00:09:26.509 { 00:09:26.509 "dma_device_id": "system", 00:09:26.509 "dma_device_type": 1 00:09:26.509 }, 00:09:26.509 { 00:09:26.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.509 "dma_device_type": 2 00:09:26.509 } 00:09:26.509 ], 00:09:26.509 "driver_specific": { 00:09:26.509 "raid": { 00:09:26.509 "uuid": "6769bb94-7758-44d7-90d7-3196a66ca617", 00:09:26.509 "strip_size_kb": 64, 00:09:26.509 "state": "online", 00:09:26.509 "raid_level": "raid0", 00:09:26.509 "superblock": true, 00:09:26.509 "num_base_bdevs": 2, 00:09:26.509 "num_base_bdevs_discovered": 2, 00:09:26.509 "num_base_bdevs_operational": 2, 00:09:26.509 "base_bdevs_list": [ 00:09:26.509 { 00:09:26.509 "name": "pt1", 00:09:26.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.509 "is_configured": true, 00:09:26.509 "data_offset": 2048, 00:09:26.509 "data_size": 63488 00:09:26.509 }, 00:09:26.509 { 00:09:26.509 "name": "pt2", 00:09:26.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.509 "is_configured": true, 00:09:26.509 "data_offset": 2048, 00:09:26.509 "data_size": 63488 00:09:26.509 } 00:09:26.509 ] 00:09:26.509 } 00:09:26.509 } 00:09:26.509 }' 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:26.509 pt2' 00:09:26.509 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.769 20:02:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.769 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.769 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.769 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:26.769 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.769 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:26.769 [2024-12-05 20:02:28.069237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6769bb94-7758-44d7-90d7-3196a66ca617 '!=' 6769bb94-7758-44d7-90d7-3196a66ca617 ']' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61330 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61330 ']' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61330 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61330 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
61330' 00:09:26.769 killing process with pid 61330 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61330 00:09:26.769 [2024-12-05 20:02:28.130789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.769 [2024-12-05 20:02:28.130967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.769 [2024-12-05 20:02:28.131051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.769 [2024-12-05 20:02:28.131103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:26.769 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61330 00:09:27.029 [2024-12-05 20:02:28.333784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.409 20:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:28.409 00:09:28.409 real 0m4.377s 00:09:28.409 user 0m6.116s 00:09:28.409 sys 0m0.734s 00:09:28.409 20:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.409 20:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.409 ************************************ 00:09:28.409 END TEST raid_superblock_test 00:09:28.409 ************************************ 00:09:28.409 20:02:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:28.409 20:02:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.409 20:02:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.409 20:02:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.409 ************************************ 00:09:28.409 START TEST raid_read_error_test 00:09:28.409 ************************************ 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zkxwcdxhYL 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61537 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61537 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61537 ']' 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.409 20:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.409 [2024-12-05 20:02:29.597589] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:28.409 [2024-12-05 20:02:29.597701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61537 ] 00:09:28.409 [2024-12-05 20:02:29.768668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.668 [2024-12-05 20:02:29.879173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.668 [2024-12-05 20:02:30.075765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.668 [2024-12-05 20:02:30.075804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 BaseBdev1_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 true 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 [2024-12-05 20:02:30.470934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.236 [2024-12-05 20:02:30.470984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.236 [2024-12-05 20:02:30.471004] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.236 [2024-12-05 20:02:30.471014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.236 [2024-12-05 20:02:30.473167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.236 [2024-12-05 20:02:30.473280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.236 BaseBdev1 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 BaseBdev2_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.236 true 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.236 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.237 [2024-12-05 20:02:30.538540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.237 [2024-12-05 20:02:30.538608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.237 [2024-12-05 20:02:30.538626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.237 [2024-12-05 20:02:30.538636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.237 [2024-12-05 20:02:30.540797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.237 [2024-12-05 20:02:30.540909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.237 BaseBdev2 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.237 [2024-12-05 20:02:30.550622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:29.237 [2024-12-05 20:02:30.552589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.237 [2024-12-05 20:02:30.552824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.237 [2024-12-05 20:02:30.552842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.237 [2024-12-05 20:02:30.553132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.237 [2024-12-05 20:02:30.553325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.237 [2024-12-05 20:02:30.553338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:29.237 [2024-12-05 20:02:30.553523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.237 "name": "raid_bdev1", 00:09:29.237 "uuid": "620ba891-4a81-4a68-97dd-c41ad487ed41", 00:09:29.237 "strip_size_kb": 64, 00:09:29.237 "state": "online", 00:09:29.237 "raid_level": "raid0", 00:09:29.237 "superblock": true, 00:09:29.237 "num_base_bdevs": 2, 00:09:29.237 "num_base_bdevs_discovered": 2, 00:09:29.237 "num_base_bdevs_operational": 2, 00:09:29.237 "base_bdevs_list": [ 00:09:29.237 { 00:09:29.237 "name": "BaseBdev1", 00:09:29.237 "uuid": "97a5ceeb-8061-5cae-8f9b-30cddb8597b7", 00:09:29.237 "is_configured": true, 00:09:29.237 "data_offset": 2048, 00:09:29.237 "data_size": 63488 00:09:29.237 }, 00:09:29.237 { 00:09:29.237 "name": "BaseBdev2", 00:09:29.237 "uuid": "5f644396-b19a-5de0-8554-6fbf0d318070", 00:09:29.237 "is_configured": true, 00:09:29.237 "data_offset": 2048, 00:09:29.237 "data_size": 63488 00:09:29.237 } 00:09:29.237 ] 00:09:29.237 }' 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.237 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.806 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:29.806 20:02:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.806 [2024-12-05 20:02:31.142980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.746 "name": "raid_bdev1", 00:09:30.746 "uuid": "620ba891-4a81-4a68-97dd-c41ad487ed41", 00:09:30.746 "strip_size_kb": 64, 00:09:30.746 "state": "online", 00:09:30.746 "raid_level": "raid0", 00:09:30.746 "superblock": true, 00:09:30.746 "num_base_bdevs": 2, 00:09:30.746 "num_base_bdevs_discovered": 2, 00:09:30.746 "num_base_bdevs_operational": 2, 00:09:30.746 "base_bdevs_list": [ 00:09:30.746 { 00:09:30.746 "name": "BaseBdev1", 00:09:30.746 "uuid": "97a5ceeb-8061-5cae-8f9b-30cddb8597b7", 00:09:30.746 "is_configured": true, 00:09:30.746 "data_offset": 2048, 00:09:30.746 "data_size": 63488 00:09:30.746 }, 00:09:30.746 { 00:09:30.746 "name": "BaseBdev2", 00:09:30.746 "uuid": "5f644396-b19a-5de0-8554-6fbf0d318070", 00:09:30.746 "is_configured": true, 00:09:30.746 "data_offset": 2048, 00:09:30.746 "data_size": 63488 00:09:30.746 } 00:09:30.746 ] 00:09:30.746 }' 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.746 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.316 20:02:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.316 [2024-12-05 20:02:32.521394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.316 [2024-12-05 20:02:32.521517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.316 [2024-12-05 20:02:32.524673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.316 [2024-12-05 20:02:32.524768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.316 [2024-12-05 20:02:32.524824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.316 [2024-12-05 20:02:32.524838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.316 { 00:09:31.316 "results": [ 00:09:31.316 { 00:09:31.316 "job": "raid_bdev1", 00:09:31.316 "core_mask": "0x1", 00:09:31.316 "workload": "randrw", 00:09:31.316 "percentage": 50, 00:09:31.316 "status": "finished", 00:09:31.316 "queue_depth": 1, 00:09:31.316 "io_size": 131072, 00:09:31.316 "runtime": 1.379512, 00:09:31.316 "iops": 16091.92236095083, 00:09:31.316 "mibps": 2011.4902951188537, 00:09:31.316 "io_failed": 1, 00:09:31.316 "io_timeout": 0, 00:09:31.316 "avg_latency_us": 86.07342350210473, 00:09:31.316 "min_latency_us": 25.041048034934498, 00:09:31.316 "max_latency_us": 1373.6803493449781 00:09:31.316 } 00:09:31.316 ], 00:09:31.316 "core_count": 1 00:09:31.316 } 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61537 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61537 ']' 00:09:31.316 20:02:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61537 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61537 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61537' 00:09:31.316 killing process with pid 61537 00:09:31.316 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61537 00:09:31.316 [2024-12-05 20:02:32.568791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.317 20:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61537 00:09:31.317 [2024-12-05 20:02:32.701411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zkxwcdxhYL 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.699 ************************************ 
00:09:32.699 END TEST raid_read_error_test 00:09:32.699 ************************************ 00:09:32.699 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:32.699 00:09:32.700 real 0m4.405s 00:09:32.700 user 0m5.315s 00:09:32.700 sys 0m0.534s 00:09:32.700 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.700 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 20:02:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:32.700 20:02:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.700 20:02:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.700 20:02:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 ************************************ 00:09:32.700 START TEST raid_write_error_test 00:09:32.700 ************************************ 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 20:02:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N4GjyCogAQ 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61683 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61683 00:09:32.700 20:02:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61683 ']' 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.700 20:02:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 [2024-12-05 20:02:34.085388] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:32.700 [2024-12-05 20:02:34.085665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61683 ] 00:09:32.960 [2024-12-05 20:02:34.286718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.220 [2024-12-05 20:02:34.403882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.220 [2024-12-05 20:02:34.594973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.220 [2024-12-05 20:02:34.595130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 BaseBdev1_malloc 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 true 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 [2024-12-05 20:02:34.984726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.807 [2024-12-05 20:02:34.984850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.807 [2024-12-05 20:02:34.984880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.807 [2024-12-05 20:02:34.984912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.807 [2024-12-05 20:02:34.987153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.807 [2024-12-05 20:02:34.987194] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.807 BaseBdev1 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 BaseBdev2_malloc 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 true 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 [2024-12-05 20:02:35.051678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.807 [2024-12-05 20:02:35.051735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.807 [2024-12-05 20:02:35.051752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.807 
[2024-12-05 20:02:35.051762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:33.807 [2024-12-05 20:02:35.053992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:33.807 [2024-12-05 20:02:35.054028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:33.807 BaseBdev2
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.807 [2024-12-05 20:02:35.063720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:33.807 [2024-12-05 20:02:35.065598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:33.807 [2024-12-05 20:02:35.065867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:33.807 [2024-12-05 20:02:35.065900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:33.807 [2024-12-05 20:02:35.066154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:09:33.807 [2024-12-05 20:02:35.066342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:33.807 [2024-12-05 20:02:35.066355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:33.807 [2024-12-05 20:02:35.066526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.807 "name": "raid_bdev1",
00:09:33.807 "uuid": "da9ac208-48e4-4a60-9d40-a278e5137d40",
00:09:33.807 "strip_size_kb": 64,
00:09:33.807 "state": "online",
00:09:33.807 "raid_level": "raid0",
00:09:33.807 "superblock": true,
00:09:33.807 "num_base_bdevs": 2,
00:09:33.807 "num_base_bdevs_discovered": 2,
00:09:33.807 "num_base_bdevs_operational": 2,
00:09:33.807 "base_bdevs_list": [
00:09:33.807 {
00:09:33.807 "name": "BaseBdev1",
00:09:33.807 "uuid": "b9e336bb-7ee9-5252-9bcb-57522aa5acac",
00:09:33.807 "is_configured": true,
00:09:33.807 "data_offset": 2048,
00:09:33.807 "data_size": 63488
00:09:33.807 },
00:09:33.807 {
00:09:33.807 "name": "BaseBdev2",
00:09:33.807 "uuid": "91d3ead6-a3df-5d7f-84fd-cf8637218072",
00:09:33.807 "is_configured": true,
00:09:33.807 "data_offset": 2048,
00:09:33.807 "data_size": 63488
00:09:33.807 }
00:09:33.807 ]
00:09:33.807 }'
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.807 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.068 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:34.068 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:34.327 [2024-12-05 20:02:35.580320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.264 "name": "raid_bdev1",
00:09:35.264 "uuid": "da9ac208-48e4-4a60-9d40-a278e5137d40",
00:09:35.264 "strip_size_kb": 64,
00:09:35.264 "state": "online",
00:09:35.264 "raid_level": "raid0",
00:09:35.264 "superblock": true,
00:09:35.264 "num_base_bdevs": 2,
00:09:35.264 "num_base_bdevs_discovered": 2,
00:09:35.264 "num_base_bdevs_operational": 2,
00:09:35.264 "base_bdevs_list": [
00:09:35.264 {
00:09:35.264 "name": "BaseBdev1",
00:09:35.264 "uuid": "b9e336bb-7ee9-5252-9bcb-57522aa5acac",
00:09:35.264 "is_configured": true,
00:09:35.264 "data_offset": 2048,
00:09:35.264 "data_size": 63488
00:09:35.264 },
00:09:35.264 {
00:09:35.264 "name": "BaseBdev2",
00:09:35.264 "uuid": "91d3ead6-a3df-5d7f-84fd-cf8637218072",
00:09:35.264 "is_configured": true,
00:09:35.264 "data_offset": 2048,
00:09:35.264 "data_size": 63488
00:09:35.264 }
00:09:35.264 ]
00:09:35.264 }'
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.264 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.524 [2024-12-05 20:02:36.928312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:35.524 [2024-12-05 20:02:36.928435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:35.524 [2024-12-05 20:02:36.931297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:35.524 [2024-12-05 20:02:36.931387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:35.524 [2024-12-05 20:02:36.931455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:35.524 [2024-12-05 20:02:36.931502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline {
00:09:35.524 "results": [
00:09:35.524 {
00:09:35.524 "job": "raid_bdev1",
00:09:35.524 "core_mask": "0x1",
00:09:35.524 "workload": "randrw",
00:09:35.524 "percentage": 50,
00:09:35.524 "status": "finished",
00:09:35.524 "queue_depth": 1,
00:09:35.524 "io_size": 131072,
00:09:35.524 "runtime": 1.348955,
00:09:35.524 "iops": 14988.639354166744,
00:09:35.524 "mibps": 1873.579919270843,
00:09:35.524 "io_failed": 1,
00:09:35.524 "io_timeout": 0,
00:09:35.524 "avg_latency_us": 92.60075933292732,
00:09:35.524 "min_latency_us": 27.165065502183406,
00:09:35.524 "max_latency_us": 1509.6174672489083
00:09:35.524 }
00:09:35.524 ],
00:09:35.524 "core_count": 1
00:09:35.524 }
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61683
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61683 ']'
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61683
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:35.524 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61683
00:09:35.783 killing process with pid 61683 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:35.783 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:35.783 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61683'
00:09:35.783 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61683
00:09:35.783 [2024-12-05 20:02:36.966434] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:35.783 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61683
00:09:35.784 [2024-12-05 20:02:37.105105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N4GjyCogAQ
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:09:37.164
00:09:37.164 real 0m4.346s
00:09:37.164 user 0m5.169s
00:09:37.164 sys 0m0.566s
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.164 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.164 ************************************
00:09:37.164 END TEST raid_write_error_test
00:09:37.164 ************************************
00:09:37.164 20:02:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:37.164 20:02:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:09:37.164 20:02:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:37.164 20:02:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.164 20:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:37.164 ************************************
00:09:37.164 START TEST raid_state_function_test
00:09:37.164 ************************************
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61821
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61821'
00:09:37.164 Process raid pid: 61821
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61821
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61821 ']'
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:37.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:37.164 20:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.423 [2024-12-05 20:02:38.463844] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:09:37.423 [2024-12-05 20:02:38.464068] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:37.423 [2024-12-05 20:02:38.620113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:37.682 [2024-12-05 20:02:38.730596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:37.682 [2024-12-05 20:02:38.937374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:37.682 [2024-12-05 20:02:38.937459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.943 [2024-12-05 20:02:39.294921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:37.943 [2024-12-05 20:02:39.294981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:37.943 [2024-12-05 20:02:39.294993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:37.943 [2024-12-05 20:02:39.295003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.943 "name": "Existed_Raid",
00:09:37.943 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.943 "strip_size_kb": 64,
00:09:37.943 "state": "configuring",
00:09:37.943 "raid_level": "concat",
00:09:37.943 "superblock": false,
00:09:37.943 "num_base_bdevs": 2,
00:09:37.943 "num_base_bdevs_discovered": 0,
00:09:37.943 "num_base_bdevs_operational": 2,
00:09:37.943 "base_bdevs_list": [
00:09:37.943 {
00:09:37.943 "name": "BaseBdev1",
00:09:37.943 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.943 "is_configured": false,
00:09:37.943 "data_offset": 0,
00:09:37.943 "data_size": 0
00:09:37.943 },
00:09:37.943 {
00:09:37.943 "name": "BaseBdev2",
00:09:37.943 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.943 "is_configured": false,
00:09:37.943 "data_offset": 0,
00:09:37.943 "data_size": 0
00:09:37.943 }
00:09:37.943 ]
00:09:37.943 }'
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.943 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 [2024-12-05 20:02:39.686246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:38.534 [2024-12-05 20:02:39.686341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 [2024-12-05 20:02:39.698204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:38.534 [2024-12-05 20:02:39.698283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:38.534 [2024-12-05 20:02:39.698312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:38.534 [2024-12-05 20:02:39.698336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 [2024-12-05 20:02:39.744686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:38.534 BaseBdev1
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.534 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.534 [
00:09:38.534 {
00:09:38.534 "name": "BaseBdev1",
00:09:38.534 "aliases": [
00:09:38.534 "7dc2bd89-f2bb-4283-b403-73e92de7d51e"
00:09:38.534 ],
00:09:38.534 "product_name": "Malloc disk",
00:09:38.534 "block_size": 512,
00:09:38.534 "num_blocks": 65536,
00:09:38.534 "uuid": "7dc2bd89-f2bb-4283-b403-73e92de7d51e",
00:09:38.534 "assigned_rate_limits": {
00:09:38.534 "rw_ios_per_sec": 0,
00:09:38.534 "rw_mbytes_per_sec": 0,
00:09:38.535 "r_mbytes_per_sec": 0,
00:09:38.535 "w_mbytes_per_sec": 0
00:09:38.535 },
00:09:38.535 "claimed": true,
00:09:38.535 "claim_type": "exclusive_write",
00:09:38.535 "zoned": false,
00:09:38.535 "supported_io_types": {
00:09:38.535 "read": true,
00:09:38.535 "write": true,
00:09:38.535 "unmap": true,
00:09:38.535 "flush": true,
00:09:38.535 "reset": true,
00:09:38.535 "nvme_admin": false,
00:09:38.535 "nvme_io": false,
00:09:38.535 "nvme_io_md": false,
00:09:38.535 "write_zeroes": true,
00:09:38.535 "zcopy": true,
00:09:38.535 "get_zone_info": false,
00:09:38.535 "zone_management": false,
00:09:38.535 "zone_append": false,
00:09:38.535 "compare": false,
00:09:38.535 "compare_and_write": false,
00:09:38.535 "abort": true,
00:09:38.535 "seek_hole": false,
00:09:38.535 "seek_data": false,
00:09:38.535 "copy": true,
00:09:38.535 "nvme_iov_md": false
00:09:38.535 },
00:09:38.535 "memory_domains": [
00:09:38.535 {
00:09:38.535 "dma_device_id": "system",
00:09:38.535 "dma_device_type": 1
00:09:38.535 },
00:09:38.535 {
00:09:38.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.535 "dma_device_type": 2
00:09:38.535 }
00:09:38.535 ],
00:09:38.535 "driver_specific": {}
00:09:38.535 }
00:09:38.535 ]
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.535 "name": "Existed_Raid",
00:09:38.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.535 "strip_size_kb": 64,
00:09:38.535 "state": "configuring",
00:09:38.535 "raid_level": "concat",
00:09:38.535 "superblock": false,
00:09:38.535 "num_base_bdevs": 2,
00:09:38.535 "num_base_bdevs_discovered": 1,
00:09:38.535 "num_base_bdevs_operational": 2,
00:09:38.535 "base_bdevs_list": [
00:09:38.535 {
00:09:38.535 "name": "BaseBdev1",
00:09:38.535 "uuid": "7dc2bd89-f2bb-4283-b403-73e92de7d51e",
00:09:38.535 "is_configured": true,
00:09:38.535 "data_offset": 0,
00:09:38.535 "data_size": 65536
00:09:38.535 },
00:09:38.535 {
00:09:38.535 "name": "BaseBdev2",
00:09:38.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.535 "is_configured": false,
00:09:38.535 "data_offset": 0,
00:09:38.535 "data_size": 0
00:09:38.535 }
00:09:38.535 ]
00:09:38.535 }'
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:38.535 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.794 [2024-12-05 20:02:40.219938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid [2024-12-05 20:02:40.219990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.794 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.052 [2024-12-05 20:02:40.227967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed [2024-12-05 20:02:40.229823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 [2024-12-05 20:02:40.229865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.052 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.053 "name": "Existed_Raid",
00:09:39.053 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.053 "strip_size_kb": 64,
00:09:39.053 "state": "configuring",
00:09:39.053 "raid_level": "concat",
00:09:39.053 "superblock": false,
00:09:39.053 "num_base_bdevs": 2,
00:09:39.053 "num_base_bdevs_discovered": 1,
00:09:39.053 "num_base_bdevs_operational": 2,
00:09:39.053 "base_bdevs_list": [
00:09:39.053 {
00:09:39.053 "name": "BaseBdev1",
00:09:39.053 "uuid": "7dc2bd89-f2bb-4283-b403-73e92de7d51e",
00:09:39.053 "is_configured": true,
00:09:39.053 "data_offset": 0,
00:09:39.053 "data_size": 65536
00:09:39.053 },
00:09:39.053 {
00:09:39.053 "name": "BaseBdev2",
00:09:39.053 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.053 "is_configured": false,
00:09:39.053 "data_offset": 0,
00:09:39.053 "data_size": 0
00:09:39.053 }
00:09:39.053 ]
00:09:39.053 }'
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.053 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.312 [2024-12-05 20:02:40.693441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:39.312 [2024-12-05 20:02:40.693596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:39.312 [2024-12-05 20:02:40.693623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:09:39.312 [2024-12-05 20:02:40.693940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:39.312 [2024-12-05 20:02:40.694167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:39.312 [2024-12-05 20:02:40.694217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:39.312 [2024-12-05 20:02:40.694545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:39.312 BaseBdev2
20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.312 [ 00:09:39.312 { 00:09:39.312 "name": "BaseBdev2", 00:09:39.312 "aliases": [ 00:09:39.312 "9e052698-8a18-4810-a89d-9c8f3fe6b5a6" 00:09:39.312 ], 00:09:39.312 "product_name": "Malloc disk", 00:09:39.312 "block_size": 512, 00:09:39.312 "num_blocks": 65536, 00:09:39.312 "uuid": "9e052698-8a18-4810-a89d-9c8f3fe6b5a6", 00:09:39.312 "assigned_rate_limits": { 00:09:39.312 "rw_ios_per_sec": 0, 00:09:39.312 "rw_mbytes_per_sec": 0, 
00:09:39.312 "r_mbytes_per_sec": 0, 00:09:39.312 "w_mbytes_per_sec": 0 00:09:39.312 }, 00:09:39.312 "claimed": true, 00:09:39.312 "claim_type": "exclusive_write", 00:09:39.312 "zoned": false, 00:09:39.312 "supported_io_types": { 00:09:39.312 "read": true, 00:09:39.312 "write": true, 00:09:39.312 "unmap": true, 00:09:39.312 "flush": true, 00:09:39.312 "reset": true, 00:09:39.312 "nvme_admin": false, 00:09:39.312 "nvme_io": false, 00:09:39.312 "nvme_io_md": false, 00:09:39.312 "write_zeroes": true, 00:09:39.312 "zcopy": true, 00:09:39.312 "get_zone_info": false, 00:09:39.312 "zone_management": false, 00:09:39.312 "zone_append": false, 00:09:39.312 "compare": false, 00:09:39.312 "compare_and_write": false, 00:09:39.312 "abort": true, 00:09:39.312 "seek_hole": false, 00:09:39.312 "seek_data": false, 00:09:39.312 "copy": true, 00:09:39.312 "nvme_iov_md": false 00:09:39.312 }, 00:09:39.312 "memory_domains": [ 00:09:39.312 { 00:09:39.312 "dma_device_id": "system", 00:09:39.312 "dma_device_type": 1 00:09:39.312 }, 00:09:39.312 { 00:09:39.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.312 "dma_device_type": 2 00:09:39.312 } 00:09:39.312 ], 00:09:39.312 "driver_specific": {} 00:09:39.312 } 00:09:39.312 ] 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.312 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.571 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.571 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.571 "name": "Existed_Raid", 00:09:39.571 "uuid": "ca82da3a-4d95-40c8-b31a-889c3d1b80de", 00:09:39.571 "strip_size_kb": 64, 00:09:39.571 "state": "online", 00:09:39.571 "raid_level": "concat", 00:09:39.571 "superblock": false, 00:09:39.571 "num_base_bdevs": 2, 00:09:39.571 "num_base_bdevs_discovered": 2, 00:09:39.571 "num_base_bdevs_operational": 2, 00:09:39.571 "base_bdevs_list": [ 00:09:39.571 { 00:09:39.571 "name": "BaseBdev1", 00:09:39.571 "uuid": "7dc2bd89-f2bb-4283-b403-73e92de7d51e", 00:09:39.571 
"is_configured": true, 00:09:39.571 "data_offset": 0, 00:09:39.571 "data_size": 65536 00:09:39.571 }, 00:09:39.571 { 00:09:39.571 "name": "BaseBdev2", 00:09:39.571 "uuid": "9e052698-8a18-4810-a89d-9c8f3fe6b5a6", 00:09:39.571 "is_configured": true, 00:09:39.571 "data_offset": 0, 00:09:39.571 "data_size": 65536 00:09:39.571 } 00:09:39.571 ] 00:09:39.571 }' 00:09:39.571 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.571 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.831 [2024-12-05 20:02:41.192919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:39.831 "name": "Existed_Raid", 00:09:39.831 "aliases": [ 00:09:39.831 "ca82da3a-4d95-40c8-b31a-889c3d1b80de" 00:09:39.831 ], 00:09:39.831 "product_name": "Raid Volume", 00:09:39.831 "block_size": 512, 00:09:39.831 "num_blocks": 131072, 00:09:39.831 "uuid": "ca82da3a-4d95-40c8-b31a-889c3d1b80de", 00:09:39.831 "assigned_rate_limits": { 00:09:39.831 "rw_ios_per_sec": 0, 00:09:39.831 "rw_mbytes_per_sec": 0, 00:09:39.831 "r_mbytes_per_sec": 0, 00:09:39.831 "w_mbytes_per_sec": 0 00:09:39.831 }, 00:09:39.831 "claimed": false, 00:09:39.831 "zoned": false, 00:09:39.831 "supported_io_types": { 00:09:39.831 "read": true, 00:09:39.831 "write": true, 00:09:39.831 "unmap": true, 00:09:39.831 "flush": true, 00:09:39.831 "reset": true, 00:09:39.831 "nvme_admin": false, 00:09:39.831 "nvme_io": false, 00:09:39.831 "nvme_io_md": false, 00:09:39.831 "write_zeroes": true, 00:09:39.831 "zcopy": false, 00:09:39.831 "get_zone_info": false, 00:09:39.831 "zone_management": false, 00:09:39.831 "zone_append": false, 00:09:39.831 "compare": false, 00:09:39.831 "compare_and_write": false, 00:09:39.831 "abort": false, 00:09:39.831 "seek_hole": false, 00:09:39.831 "seek_data": false, 00:09:39.831 "copy": false, 00:09:39.831 "nvme_iov_md": false 00:09:39.831 }, 00:09:39.831 "memory_domains": [ 00:09:39.831 { 00:09:39.831 "dma_device_id": "system", 00:09:39.831 "dma_device_type": 1 00:09:39.831 }, 00:09:39.831 { 00:09:39.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.831 "dma_device_type": 2 00:09:39.831 }, 00:09:39.831 { 00:09:39.831 "dma_device_id": "system", 00:09:39.831 "dma_device_type": 1 00:09:39.831 }, 00:09:39.831 { 00:09:39.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.831 "dma_device_type": 2 00:09:39.831 } 00:09:39.831 ], 00:09:39.831 "driver_specific": { 00:09:39.831 "raid": { 00:09:39.831 "uuid": "ca82da3a-4d95-40c8-b31a-889c3d1b80de", 00:09:39.831 "strip_size_kb": 64, 00:09:39.831 "state": "online", 00:09:39.831 "raid_level": "concat", 
00:09:39.831 "superblock": false, 00:09:39.831 "num_base_bdevs": 2, 00:09:39.831 "num_base_bdevs_discovered": 2, 00:09:39.831 "num_base_bdevs_operational": 2, 00:09:39.831 "base_bdevs_list": [ 00:09:39.831 { 00:09:39.831 "name": "BaseBdev1", 00:09:39.831 "uuid": "7dc2bd89-f2bb-4283-b403-73e92de7d51e", 00:09:39.831 "is_configured": true, 00:09:39.831 "data_offset": 0, 00:09:39.831 "data_size": 65536 00:09:39.831 }, 00:09:39.831 { 00:09:39.831 "name": "BaseBdev2", 00:09:39.831 "uuid": "9e052698-8a18-4810-a89d-9c8f3fe6b5a6", 00:09:39.831 "is_configured": true, 00:09:39.831 "data_offset": 0, 00:09:39.831 "data_size": 65536 00:09:39.831 } 00:09:39.831 ] 00:09:39.831 } 00:09:39.831 } 00:09:39.831 }' 00:09:39.831 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.091 BaseBdev2' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.091 [2024-12-05 20:02:41.420288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.091 [2024-12-05 20:02:41.420366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.091 [2024-12-05 20:02:41.420425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.091 20:02:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.091 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.092 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.092 
20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.351 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.351 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.351 "name": "Existed_Raid", 00:09:40.351 "uuid": "ca82da3a-4d95-40c8-b31a-889c3d1b80de", 00:09:40.351 "strip_size_kb": 64, 00:09:40.351 "state": "offline", 00:09:40.351 "raid_level": "concat", 00:09:40.351 "superblock": false, 00:09:40.351 "num_base_bdevs": 2, 00:09:40.351 "num_base_bdevs_discovered": 1, 00:09:40.351 "num_base_bdevs_operational": 1, 00:09:40.351 "base_bdevs_list": [ 00:09:40.351 { 00:09:40.351 "name": null, 00:09:40.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.351 "is_configured": false, 00:09:40.351 "data_offset": 0, 00:09:40.351 "data_size": 65536 00:09:40.351 }, 00:09:40.351 { 00:09:40.351 "name": "BaseBdev2", 00:09:40.351 "uuid": "9e052698-8a18-4810-a89d-9c8f3fe6b5a6", 00:09:40.351 "is_configured": true, 00:09:40.351 "data_offset": 0, 00:09:40.351 "data_size": 65536 00:09:40.351 } 00:09:40.351 ] 00:09:40.351 }' 00:09:40.351 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.351 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 20:02:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.609 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.610 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.610 [2024-12-05 20:02:41.977545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.610 [2024-12-05 20:02:41.977604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.868 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61821 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61821 ']' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61821 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61821 00:09:40.869 killing process with pid 61821 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61821' 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61821 00:09:40.869 [2024-12-05 20:02:42.167602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.869 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61821 00:09:40.869 [2024-12-05 20:02:42.185493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.247 00:09:42.247 real 0m4.962s 00:09:42.247 user 0m7.144s 00:09:42.247 sys 0m0.792s 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.247 ************************************ 00:09:42.247 END TEST raid_state_function_test 00:09:42.247 ************************************ 00:09:42.247 20:02:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:42.247 20:02:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.247 20:02:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.247 20:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.247 ************************************ 00:09:42.247 START TEST raid_state_function_test_sb 00:09:42.247 ************************************ 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.247 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62074 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62074' 00:09:42.248 Process raid pid: 62074 00:09:42.248 20:02:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62074 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62074 ']' 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.248 20:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.248 [2024-12-05 20:02:43.486386] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:42.248 [2024-12-05 20:02:43.486604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.248 [2024-12-05 20:02:43.643058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.506 [2024-12-05 20:02:43.764142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.765 [2024-12-05 20:02:43.972539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.765 [2024-12-05 20:02:43.972589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.024 [2024-12-05 20:02:44.337291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.024 [2024-12-05 20:02:44.337451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.024 [2024-12-05 20:02:44.337471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.024 [2024-12-05 20:02:44.337484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.024 "name": "Existed_Raid", 00:09:43.024 "uuid": "6ec06c3b-2053-4150-9d8a-b6af8f56697e", 00:09:43.024 
"strip_size_kb": 64, 00:09:43.024 "state": "configuring", 00:09:43.024 "raid_level": "concat", 00:09:43.024 "superblock": true, 00:09:43.024 "num_base_bdevs": 2, 00:09:43.024 "num_base_bdevs_discovered": 0, 00:09:43.024 "num_base_bdevs_operational": 2, 00:09:43.024 "base_bdevs_list": [ 00:09:43.024 { 00:09:43.024 "name": "BaseBdev1", 00:09:43.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.024 "is_configured": false, 00:09:43.024 "data_offset": 0, 00:09:43.024 "data_size": 0 00:09:43.024 }, 00:09:43.024 { 00:09:43.024 "name": "BaseBdev2", 00:09:43.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.024 "is_configured": false, 00:09:43.024 "data_offset": 0, 00:09:43.024 "data_size": 0 00:09:43.024 } 00:09:43.024 ] 00:09:43.024 }' 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.024 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.600 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.600 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.600 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.600 [2024-12-05 20:02:44.792451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.601 [2024-12-05 20:02:44.792496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 [2024-12-05 20:02:44.804423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.601 [2024-12-05 20:02:44.804471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.601 [2024-12-05 20:02:44.804481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.601 [2024-12-05 20:02:44.804493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 [2024-12-05 20:02:44.855772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.601 BaseBdev1 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 [ 00:09:43.601 { 00:09:43.601 "name": "BaseBdev1", 00:09:43.601 "aliases": [ 00:09:43.601 "eb9a4587-e512-47c2-b433-a04eb843fc21" 00:09:43.601 ], 00:09:43.601 "product_name": "Malloc disk", 00:09:43.601 "block_size": 512, 00:09:43.601 "num_blocks": 65536, 00:09:43.601 "uuid": "eb9a4587-e512-47c2-b433-a04eb843fc21", 00:09:43.601 "assigned_rate_limits": { 00:09:43.601 "rw_ios_per_sec": 0, 00:09:43.601 "rw_mbytes_per_sec": 0, 00:09:43.601 "r_mbytes_per_sec": 0, 00:09:43.601 "w_mbytes_per_sec": 0 00:09:43.601 }, 00:09:43.601 "claimed": true, 00:09:43.601 "claim_type": "exclusive_write", 00:09:43.601 "zoned": false, 00:09:43.601 "supported_io_types": { 00:09:43.601 "read": true, 00:09:43.601 "write": true, 00:09:43.601 "unmap": true, 00:09:43.601 "flush": true, 00:09:43.601 "reset": true, 00:09:43.601 "nvme_admin": false, 00:09:43.601 "nvme_io": false, 00:09:43.601 "nvme_io_md": false, 00:09:43.601 "write_zeroes": true, 00:09:43.601 "zcopy": true, 00:09:43.601 "get_zone_info": false, 00:09:43.601 "zone_management": false, 00:09:43.601 "zone_append": false, 00:09:43.601 "compare": false, 00:09:43.601 
"compare_and_write": false, 00:09:43.601 "abort": true, 00:09:43.601 "seek_hole": false, 00:09:43.601 "seek_data": false, 00:09:43.601 "copy": true, 00:09:43.601 "nvme_iov_md": false 00:09:43.601 }, 00:09:43.601 "memory_domains": [ 00:09:43.601 { 00:09:43.601 "dma_device_id": "system", 00:09:43.601 "dma_device_type": 1 00:09:43.601 }, 00:09:43.601 { 00:09:43.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.601 "dma_device_type": 2 00:09:43.601 } 00:09:43.601 ], 00:09:43.601 "driver_specific": {} 00:09:43.601 } 00:09:43.601 ] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.601 20:02:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.601 "name": "Existed_Raid", 00:09:43.601 "uuid": "e088ab93-e450-4d93-8e58-1564ef5f4f63", 00:09:43.601 "strip_size_kb": 64, 00:09:43.601 "state": "configuring", 00:09:43.601 "raid_level": "concat", 00:09:43.601 "superblock": true, 00:09:43.601 "num_base_bdevs": 2, 00:09:43.601 "num_base_bdevs_discovered": 1, 00:09:43.601 "num_base_bdevs_operational": 2, 00:09:43.601 "base_bdevs_list": [ 00:09:43.601 { 00:09:43.601 "name": "BaseBdev1", 00:09:43.601 "uuid": "eb9a4587-e512-47c2-b433-a04eb843fc21", 00:09:43.601 "is_configured": true, 00:09:43.601 "data_offset": 2048, 00:09:43.601 "data_size": 63488 00:09:43.601 }, 00:09:43.601 { 00:09:43.601 "name": "BaseBdev2", 00:09:43.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.601 "is_configured": false, 00:09:43.601 "data_offset": 0, 00:09:43.601 "data_size": 0 00:09:43.601 } 00:09:43.601 ] 00:09:43.601 }' 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.601 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.180 [2024-12-05 20:02:45.339005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.180 [2024-12-05 20:02:45.339062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.180 [2024-12-05 20:02:45.351034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.180 [2024-12-05 20:02:45.352852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.180 [2024-12-05 20:02:45.352910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.180 "name": "Existed_Raid", 00:09:44.180 "uuid": "a39a184c-c3bd-44bf-a86e-4891f6b4d93d", 00:09:44.180 "strip_size_kb": 64, 00:09:44.180 "state": "configuring", 00:09:44.180 "raid_level": "concat", 00:09:44.180 "superblock": true, 00:09:44.180 "num_base_bdevs": 2, 00:09:44.180 "num_base_bdevs_discovered": 1, 00:09:44.180 "num_base_bdevs_operational": 2, 00:09:44.180 "base_bdevs_list": [ 00:09:44.180 { 00:09:44.180 "name": "BaseBdev1", 00:09:44.180 "uuid": 
"eb9a4587-e512-47c2-b433-a04eb843fc21", 00:09:44.180 "is_configured": true, 00:09:44.180 "data_offset": 2048, 00:09:44.180 "data_size": 63488 00:09:44.180 }, 00:09:44.180 { 00:09:44.180 "name": "BaseBdev2", 00:09:44.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.180 "is_configured": false, 00:09:44.180 "data_offset": 0, 00:09:44.180 "data_size": 0 00:09:44.180 } 00:09:44.180 ] 00:09:44.180 }' 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.180 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.440 [2024-12-05 20:02:45.815729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.440 [2024-12-05 20:02:45.816009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.440 BaseBdev2 00:09:44.440 [2024-12-05 20:02:45.816066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:44.440 [2024-12-05 20:02:45.816367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:44.440 [2024-12-05 20:02:45.816527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.440 [2024-12-05 20:02:45.816541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:44.440 [2024-12-05 20:02:45.816679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.440 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.441 [ 00:09:44.441 { 00:09:44.441 "name": "BaseBdev2", 00:09:44.441 "aliases": [ 00:09:44.441 "946d57df-97e8-4405-a30e-3410605ec6ed" 00:09:44.441 ], 00:09:44.441 "product_name": "Malloc disk", 00:09:44.441 "block_size": 512, 00:09:44.441 "num_blocks": 65536, 00:09:44.441 "uuid": "946d57df-97e8-4405-a30e-3410605ec6ed", 00:09:44.441 "assigned_rate_limits": { 00:09:44.441 "rw_ios_per_sec": 0, 00:09:44.441 "rw_mbytes_per_sec": 0, 00:09:44.441 "r_mbytes_per_sec": 0, 
00:09:44.441 "w_mbytes_per_sec": 0 00:09:44.441 }, 00:09:44.441 "claimed": true, 00:09:44.441 "claim_type": "exclusive_write", 00:09:44.441 "zoned": false, 00:09:44.441 "supported_io_types": { 00:09:44.441 "read": true, 00:09:44.441 "write": true, 00:09:44.441 "unmap": true, 00:09:44.441 "flush": true, 00:09:44.441 "reset": true, 00:09:44.441 "nvme_admin": false, 00:09:44.441 "nvme_io": false, 00:09:44.441 "nvme_io_md": false, 00:09:44.441 "write_zeroes": true, 00:09:44.441 "zcopy": true, 00:09:44.441 "get_zone_info": false, 00:09:44.441 "zone_management": false, 00:09:44.441 "zone_append": false, 00:09:44.441 "compare": false, 00:09:44.441 "compare_and_write": false, 00:09:44.441 "abort": true, 00:09:44.441 "seek_hole": false, 00:09:44.441 "seek_data": false, 00:09:44.441 "copy": true, 00:09:44.441 "nvme_iov_md": false 00:09:44.441 }, 00:09:44.441 "memory_domains": [ 00:09:44.441 { 00:09:44.441 "dma_device_id": "system", 00:09:44.441 "dma_device_type": 1 00:09:44.441 }, 00:09:44.441 { 00:09:44.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.441 "dma_device_type": 2 00:09:44.441 } 00:09:44.441 ], 00:09:44.441 "driver_specific": {} 00:09:44.441 } 00:09:44.441 ] 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.441 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.700 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.700 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.700 "name": "Existed_Raid", 00:09:44.700 "uuid": "a39a184c-c3bd-44bf-a86e-4891f6b4d93d", 00:09:44.700 "strip_size_kb": 64, 00:09:44.700 "state": "online", 00:09:44.700 "raid_level": "concat", 00:09:44.700 "superblock": true, 00:09:44.700 "num_base_bdevs": 2, 00:09:44.700 "num_base_bdevs_discovered": 2, 00:09:44.700 "num_base_bdevs_operational": 2, 00:09:44.700 "base_bdevs_list": [ 00:09:44.700 { 00:09:44.700 "name": "BaseBdev1", 00:09:44.700 "uuid": 
"eb9a4587-e512-47c2-b433-a04eb843fc21", 00:09:44.700 "is_configured": true, 00:09:44.700 "data_offset": 2048, 00:09:44.700 "data_size": 63488 00:09:44.700 }, 00:09:44.700 { 00:09:44.700 "name": "BaseBdev2", 00:09:44.700 "uuid": "946d57df-97e8-4405-a30e-3410605ec6ed", 00:09:44.700 "is_configured": true, 00:09:44.700 "data_offset": 2048, 00:09:44.700 "data_size": 63488 00:09:44.700 } 00:09:44.700 ] 00:09:44.700 }' 00:09:44.700 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.700 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.959 [2024-12-05 20:02:46.299269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.959 "name": "Existed_Raid", 00:09:44.959 "aliases": [ 00:09:44.959 "a39a184c-c3bd-44bf-a86e-4891f6b4d93d" 00:09:44.959 ], 00:09:44.959 "product_name": "Raid Volume", 00:09:44.959 "block_size": 512, 00:09:44.959 "num_blocks": 126976, 00:09:44.959 "uuid": "a39a184c-c3bd-44bf-a86e-4891f6b4d93d", 00:09:44.959 "assigned_rate_limits": { 00:09:44.959 "rw_ios_per_sec": 0, 00:09:44.959 "rw_mbytes_per_sec": 0, 00:09:44.959 "r_mbytes_per_sec": 0, 00:09:44.959 "w_mbytes_per_sec": 0 00:09:44.959 }, 00:09:44.959 "claimed": false, 00:09:44.959 "zoned": false, 00:09:44.959 "supported_io_types": { 00:09:44.959 "read": true, 00:09:44.959 "write": true, 00:09:44.959 "unmap": true, 00:09:44.959 "flush": true, 00:09:44.959 "reset": true, 00:09:44.959 "nvme_admin": false, 00:09:44.959 "nvme_io": false, 00:09:44.959 "nvme_io_md": false, 00:09:44.959 "write_zeroes": true, 00:09:44.959 "zcopy": false, 00:09:44.959 "get_zone_info": false, 00:09:44.959 "zone_management": false, 00:09:44.959 "zone_append": false, 00:09:44.959 "compare": false, 00:09:44.959 "compare_and_write": false, 00:09:44.959 "abort": false, 00:09:44.959 "seek_hole": false, 00:09:44.959 "seek_data": false, 00:09:44.959 "copy": false, 00:09:44.959 "nvme_iov_md": false 00:09:44.959 }, 00:09:44.959 "memory_domains": [ 00:09:44.959 { 00:09:44.959 "dma_device_id": "system", 00:09:44.959 "dma_device_type": 1 00:09:44.959 }, 00:09:44.959 { 00:09:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.959 "dma_device_type": 2 00:09:44.959 }, 00:09:44.959 { 00:09:44.959 "dma_device_id": "system", 00:09:44.959 "dma_device_type": 1 00:09:44.959 }, 00:09:44.959 { 00:09:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.959 "dma_device_type": 2 00:09:44.959 } 00:09:44.959 ], 00:09:44.959 "driver_specific": { 00:09:44.959 "raid": { 00:09:44.959 "uuid": "a39a184c-c3bd-44bf-a86e-4891f6b4d93d", 00:09:44.959 
"strip_size_kb": 64, 00:09:44.959 "state": "online", 00:09:44.959 "raid_level": "concat", 00:09:44.959 "superblock": true, 00:09:44.959 "num_base_bdevs": 2, 00:09:44.959 "num_base_bdevs_discovered": 2, 00:09:44.959 "num_base_bdevs_operational": 2, 00:09:44.959 "base_bdevs_list": [ 00:09:44.959 { 00:09:44.959 "name": "BaseBdev1", 00:09:44.959 "uuid": "eb9a4587-e512-47c2-b433-a04eb843fc21", 00:09:44.959 "is_configured": true, 00:09:44.959 "data_offset": 2048, 00:09:44.959 "data_size": 63488 00:09:44.959 }, 00:09:44.959 { 00:09:44.959 "name": "BaseBdev2", 00:09:44.959 "uuid": "946d57df-97e8-4405-a30e-3410605ec6ed", 00:09:44.959 "is_configured": true, 00:09:44.959 "data_offset": 2048, 00:09:44.959 "data_size": 63488 00:09:44.959 } 00:09:44.959 ] 00:09:44.959 } 00:09:44.959 } 00:09:44.959 }' 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:44.959 BaseBdev2' 00:09:44.959 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.220 [2024-12-05 20:02:46.506715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.220 [2024-12-05 20:02:46.506751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.220 [2024-12-05 20:02:46.506806] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.220 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.480 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.480 "name": "Existed_Raid", 00:09:45.480 "uuid": "a39a184c-c3bd-44bf-a86e-4891f6b4d93d", 00:09:45.480 "strip_size_kb": 64, 00:09:45.480 "state": "offline", 00:09:45.480 "raid_level": "concat", 00:09:45.480 "superblock": true, 00:09:45.480 "num_base_bdevs": 2, 00:09:45.480 "num_base_bdevs_discovered": 1, 00:09:45.480 "num_base_bdevs_operational": 1, 00:09:45.480 "base_bdevs_list": [ 00:09:45.480 { 00:09:45.480 "name": null, 00:09:45.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.480 "is_configured": false, 00:09:45.480 "data_offset": 0, 00:09:45.480 "data_size": 63488 00:09:45.480 }, 00:09:45.480 { 00:09:45.480 "name": "BaseBdev2", 00:09:45.480 "uuid": "946d57df-97e8-4405-a30e-3410605ec6ed", 00:09:45.480 "is_configured": true, 00:09:45.480 "data_offset": 2048, 00:09:45.480 "data_size": 63488 00:09:45.480 } 00:09:45.480 ] 00:09:45.480 }' 00:09:45.480 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.480 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.740 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.740 [2024-12-05 20:02:47.074633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.740 [2024-12-05 20:02:47.074699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.000 20:02:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62074 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62074 ']' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62074 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62074 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.000 killing process with pid 62074 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62074' 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62074 00:09:46.000 [2024-12-05 20:02:47.269780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.000 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62074 00:09:46.000 [2024-12-05 20:02:47.286239] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.386 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:47.386 ************************************ 00:09:47.386 END TEST raid_state_function_test_sb 00:09:47.386 ************************************ 00:09:47.386 00:09:47.386 real 0m5.039s 00:09:47.386 user 0m7.272s 00:09:47.386 sys 0m0.795s 00:09:47.386 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.386 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 20:02:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:47.386 20:02:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:47.386 20:02:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.386 20:02:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 ************************************ 00:09:47.386 START TEST raid_superblock_test 00:09:47.386 ************************************ 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:47.386 
20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62321 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62321 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62321 ']' 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.386 20:02:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 [2024-12-05 20:02:48.580399] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:47.386 [2024-12-05 20:02:48.580991] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62321 ] 00:09:47.386 [2024-12-05 20:02:48.756282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.649 [2024-12-05 20:02:48.870800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.649 [2024-12-05 20:02:49.070149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.649 [2024-12-05 20:02:49.070298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.218 20:02:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 malloc1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 [2024-12-05 20:02:49.514494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.218 [2024-12-05 20:02:49.514618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.218 [2024-12-05 20:02:49.514673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:48.218 [2024-12-05 20:02:49.514709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.218 [2024-12-05 20:02:49.517161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.218 [2024-12-05 20:02:49.517245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.218 pt1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.218 20:02:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 malloc2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 [2024-12-05 20:02:49.572186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.218 [2024-12-05 20:02:49.572243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.218 [2024-12-05 20:02:49.572285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:48.218 
[2024-12-05 20:02:49.572294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.218 [2024-12-05 20:02:49.574412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.218 [2024-12-05 20:02:49.574452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.218 pt2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 [2024-12-05 20:02:49.584237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.218 [2024-12-05 20:02:49.586044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.218 [2024-12-05 20:02:49.586217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:48.218 [2024-12-05 20:02:49.586230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:48.218 [2024-12-05 20:02:49.586483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.218 [2024-12-05 20:02:49.586640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:48.218 [2024-12-05 20:02:49.586660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:48.218 [2024-12-05 20:02:49.586826] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.218 "name": "raid_bdev1", 00:09:48.218 "uuid": 
"a83880ee-026f-4720-a980-ec993d72f121", 00:09:48.218 "strip_size_kb": 64, 00:09:48.218 "state": "online", 00:09:48.218 "raid_level": "concat", 00:09:48.218 "superblock": true, 00:09:48.218 "num_base_bdevs": 2, 00:09:48.218 "num_base_bdevs_discovered": 2, 00:09:48.218 "num_base_bdevs_operational": 2, 00:09:48.218 "base_bdevs_list": [ 00:09:48.218 { 00:09:48.218 "name": "pt1", 00:09:48.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.218 "is_configured": true, 00:09:48.218 "data_offset": 2048, 00:09:48.218 "data_size": 63488 00:09:48.218 }, 00:09:48.218 { 00:09:48.218 "name": "pt2", 00:09:48.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.218 "is_configured": true, 00:09:48.218 "data_offset": 2048, 00:09:48.218 "data_size": 63488 00:09:48.218 } 00:09:48.218 ] 00:09:48.218 }' 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.218 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.786 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.786 
20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.786 [2024-12-05 20:02:49.991813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.786 "name": "raid_bdev1", 00:09:48.786 "aliases": [ 00:09:48.786 "a83880ee-026f-4720-a980-ec993d72f121" 00:09:48.786 ], 00:09:48.786 "product_name": "Raid Volume", 00:09:48.786 "block_size": 512, 00:09:48.786 "num_blocks": 126976, 00:09:48.786 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:48.786 "assigned_rate_limits": { 00:09:48.786 "rw_ios_per_sec": 0, 00:09:48.786 "rw_mbytes_per_sec": 0, 00:09:48.786 "r_mbytes_per_sec": 0, 00:09:48.786 "w_mbytes_per_sec": 0 00:09:48.786 }, 00:09:48.786 "claimed": false, 00:09:48.786 "zoned": false, 00:09:48.786 "supported_io_types": { 00:09:48.786 "read": true, 00:09:48.786 "write": true, 00:09:48.786 "unmap": true, 00:09:48.786 "flush": true, 00:09:48.786 "reset": true, 00:09:48.786 "nvme_admin": false, 00:09:48.786 "nvme_io": false, 00:09:48.786 "nvme_io_md": false, 00:09:48.786 "write_zeroes": true, 00:09:48.786 "zcopy": false, 00:09:48.786 "get_zone_info": false, 00:09:48.786 "zone_management": false, 00:09:48.786 "zone_append": false, 00:09:48.786 "compare": false, 00:09:48.786 "compare_and_write": false, 00:09:48.786 "abort": false, 00:09:48.786 "seek_hole": false, 00:09:48.786 "seek_data": false, 00:09:48.786 "copy": false, 00:09:48.786 "nvme_iov_md": false 00:09:48.786 }, 00:09:48.786 "memory_domains": [ 00:09:48.786 { 00:09:48.786 "dma_device_id": "system", 00:09:48.786 "dma_device_type": 1 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.786 "dma_device_type": 2 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "dma_device_id": "system", 00:09:48.786 
"dma_device_type": 1 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.786 "dma_device_type": 2 00:09:48.786 } 00:09:48.786 ], 00:09:48.786 "driver_specific": { 00:09:48.786 "raid": { 00:09:48.786 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:48.786 "strip_size_kb": 64, 00:09:48.786 "state": "online", 00:09:48.786 "raid_level": "concat", 00:09:48.786 "superblock": true, 00:09:48.786 "num_base_bdevs": 2, 00:09:48.786 "num_base_bdevs_discovered": 2, 00:09:48.786 "num_base_bdevs_operational": 2, 00:09:48.786 "base_bdevs_list": [ 00:09:48.786 { 00:09:48.786 "name": "pt1", 00:09:48.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.786 "is_configured": true, 00:09:48.786 "data_offset": 2048, 00:09:48.786 "data_size": 63488 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "name": "pt2", 00:09:48.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.786 "is_configured": true, 00:09:48.786 "data_offset": 2048, 00:09:48.786 "data_size": 63488 00:09:48.786 } 00:09:48.786 ] 00:09:48.786 } 00:09:48.786 } 00:09:48.786 }' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.786 pt2' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.786 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.046 [2024-12-05 20:02:50.231451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a83880ee-026f-4720-a980-ec993d72f121 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a83880ee-026f-4720-a980-ec993d72f121 ']' 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.046 [2024-12-05 20:02:50.279028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.046 [2024-12-05 20:02:50.279061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.046 [2024-12-05 20:02:50.279168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.046 [2024-12-05 20:02:50.279229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.046 [2024-12-05 20:02:50.279245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 
20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:49.046 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.047 [2024-12-05 20:02:50.402829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:49.047 [2024-12-05 20:02:50.404925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:49.047 [2024-12-05 20:02:50.404993] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:49.047 [2024-12-05 20:02:50.405051] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:49.047 [2024-12-05 20:02:50.405066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.047 [2024-12-05 20:02:50.405077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:49.047 request: 00:09:49.047 { 00:09:49.047 "name": "raid_bdev1", 00:09:49.047 "raid_level": "concat", 00:09:49.047 "base_bdevs": [ 00:09:49.047 "malloc1", 00:09:49.047 "malloc2" 00:09:49.047 ], 00:09:49.047 "strip_size_kb": 64, 00:09:49.047 "superblock": false, 00:09:49.047 "method": "bdev_raid_create", 00:09:49.047 "req_id": 1 00:09:49.047 } 00:09:49.047 Got JSON-RPC error response 00:09:49.047 response: 00:09:49.047 { 00:09:49.047 "code": -17, 00:09:49.047 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:49.047 } 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.047 [2024-12-05 20:02:50.454735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.047 [2024-12-05 20:02:50.454872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.047 [2024-12-05 20:02:50.454924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:49.047 [2024-12-05 20:02:50.454963] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.047 [2024-12-05 20:02:50.457375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.047 [2024-12-05 20:02:50.457463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.047 [2024-12-05 20:02:50.457585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:49.047 [2024-12-05 20:02:50.457672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.047 pt1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.047 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.305 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.305 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.305 "name": "raid_bdev1", 00:09:49.305 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:49.305 "strip_size_kb": 64, 00:09:49.305 "state": "configuring", 00:09:49.305 "raid_level": "concat", 00:09:49.305 "superblock": true, 00:09:49.305 "num_base_bdevs": 2, 00:09:49.305 "num_base_bdevs_discovered": 1, 00:09:49.305 "num_base_bdevs_operational": 2, 00:09:49.305 "base_bdevs_list": [ 00:09:49.305 { 00:09:49.305 "name": "pt1", 00:09:49.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.305 "is_configured": true, 00:09:49.305 "data_offset": 2048, 00:09:49.305 "data_size": 63488 00:09:49.305 }, 00:09:49.305 { 00:09:49.305 "name": null, 00:09:49.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.305 "is_configured": false, 00:09:49.305 "data_offset": 2048, 00:09:49.305 "data_size": 63488 00:09:49.305 } 00:09:49.305 ] 00:09:49.305 }' 00:09:49.305 20:02:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.305 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.563 [2024-12-05 20:02:50.925959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.563 [2024-12-05 20:02:50.926040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.563 [2024-12-05 20:02:50.926064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:49.563 [2024-12-05 20:02:50.926076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.563 [2024-12-05 20:02:50.926577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.563 [2024-12-05 20:02:50.926601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.563 [2024-12-05 20:02:50.926688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.563 [2024-12-05 20:02:50.926718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.563 [2024-12-05 20:02:50.926866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.563 [2024-12-05 20:02:50.926880] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:49.563 [2024-12-05 20:02:50.927192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:49.563 [2024-12-05 20:02:50.927365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.563 [2024-12-05 20:02:50.927376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.563 [2024-12-05 20:02:50.927547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.563 pt2 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.563 "name": "raid_bdev1", 00:09:49.563 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:49.563 "strip_size_kb": 64, 00:09:49.563 "state": "online", 00:09:49.563 "raid_level": "concat", 00:09:49.563 "superblock": true, 00:09:49.563 "num_base_bdevs": 2, 00:09:49.563 "num_base_bdevs_discovered": 2, 00:09:49.563 "num_base_bdevs_operational": 2, 00:09:49.563 "base_bdevs_list": [ 00:09:49.563 { 00:09:49.563 "name": "pt1", 00:09:49.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.563 "is_configured": true, 00:09:49.563 "data_offset": 2048, 00:09:49.563 "data_size": 63488 00:09:49.563 }, 00:09:49.563 { 00:09:49.563 "name": "pt2", 00:09:49.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.563 "is_configured": true, 00:09:49.563 "data_offset": 2048, 00:09:49.563 "data_size": 63488 00:09:49.563 } 00:09:49.563 ] 00:09:49.563 }' 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.563 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.131 
20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.131 [2024-12-05 20:02:51.369449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.131 "name": "raid_bdev1", 00:09:50.131 "aliases": [ 00:09:50.131 "a83880ee-026f-4720-a980-ec993d72f121" 00:09:50.131 ], 00:09:50.131 "product_name": "Raid Volume", 00:09:50.131 "block_size": 512, 00:09:50.131 "num_blocks": 126976, 00:09:50.131 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:50.131 "assigned_rate_limits": { 00:09:50.131 "rw_ios_per_sec": 0, 00:09:50.131 "rw_mbytes_per_sec": 0, 00:09:50.131 "r_mbytes_per_sec": 0, 00:09:50.131 "w_mbytes_per_sec": 0 00:09:50.131 }, 00:09:50.131 "claimed": false, 00:09:50.131 "zoned": false, 00:09:50.131 "supported_io_types": { 00:09:50.131 "read": true, 00:09:50.131 "write": true, 00:09:50.131 "unmap": true, 00:09:50.131 "flush": true, 00:09:50.131 "reset": true, 00:09:50.131 "nvme_admin": false, 00:09:50.131 "nvme_io": false, 00:09:50.131 "nvme_io_md": false, 00:09:50.131 
"write_zeroes": true, 00:09:50.131 "zcopy": false, 00:09:50.131 "get_zone_info": false, 00:09:50.131 "zone_management": false, 00:09:50.131 "zone_append": false, 00:09:50.131 "compare": false, 00:09:50.131 "compare_and_write": false, 00:09:50.131 "abort": false, 00:09:50.131 "seek_hole": false, 00:09:50.131 "seek_data": false, 00:09:50.131 "copy": false, 00:09:50.131 "nvme_iov_md": false 00:09:50.131 }, 00:09:50.131 "memory_domains": [ 00:09:50.131 { 00:09:50.131 "dma_device_id": "system", 00:09:50.131 "dma_device_type": 1 00:09:50.131 }, 00:09:50.131 { 00:09:50.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.131 "dma_device_type": 2 00:09:50.131 }, 00:09:50.131 { 00:09:50.131 "dma_device_id": "system", 00:09:50.131 "dma_device_type": 1 00:09:50.131 }, 00:09:50.131 { 00:09:50.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.131 "dma_device_type": 2 00:09:50.131 } 00:09:50.131 ], 00:09:50.131 "driver_specific": { 00:09:50.131 "raid": { 00:09:50.131 "uuid": "a83880ee-026f-4720-a980-ec993d72f121", 00:09:50.131 "strip_size_kb": 64, 00:09:50.131 "state": "online", 00:09:50.131 "raid_level": "concat", 00:09:50.131 "superblock": true, 00:09:50.131 "num_base_bdevs": 2, 00:09:50.131 "num_base_bdevs_discovered": 2, 00:09:50.131 "num_base_bdevs_operational": 2, 00:09:50.131 "base_bdevs_list": [ 00:09:50.131 { 00:09:50.131 "name": "pt1", 00:09:50.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.131 "is_configured": true, 00:09:50.131 "data_offset": 2048, 00:09:50.131 "data_size": 63488 00:09:50.131 }, 00:09:50.131 { 00:09:50.131 "name": "pt2", 00:09:50.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.131 "is_configured": true, 00:09:50.131 "data_offset": 2048, 00:09:50.131 "data_size": 63488 00:09:50.131 } 00:09:50.131 ] 00:09:50.131 } 00:09:50.131 } 00:09:50.131 }' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.131 pt2' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.131 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.391 20:02:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.391 [2024-12-05 20:02:51.601042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a83880ee-026f-4720-a980-ec993d72f121 '!=' a83880ee-026f-4720-a980-ec993d72f121 ']' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62321 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62321 ']' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62321 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62321 00:09:50.391 20:02:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62321' 00:09:50.391 killing process with pid 62321 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62321 00:09:50.391 [2024-12-05 20:02:51.672447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.391 [2024-12-05 20:02:51.672552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.391 [2024-12-05 20:02:51.672624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.391 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62321 00:09:50.391 [2024-12-05 20:02:51.672638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:50.649 [2024-12-05 20:02:51.888376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.611 ************************************ 00:09:51.611 END TEST raid_superblock_test 00:09:51.611 ************************************ 00:09:51.611 20:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.611 00:09:51.611 real 0m4.543s 00:09:51.611 user 0m6.393s 00:09:51.611 sys 0m0.714s 00:09:51.611 20:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.611 20:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 20:02:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:51.871 20:02:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.871 20:02:53 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.871 20:02:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 ************************************ 00:09:51.871 START TEST raid_read_error_test 00:09:51.871 ************************************ 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.871 20:02:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IEKzoyFObX 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62532 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62532 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62532 ']' 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.871 20:02:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 [2024-12-05 20:02:53.205012] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:51.871 [2024-12-05 20:02:53.205222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:09:52.131 [2024-12-05 20:02:53.360902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.131 [2024-12-05 20:02:53.473474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.390 [2024-12-05 20:02:53.671355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.390 [2024-12-05 20:02:53.671420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 BaseBdev1_malloc 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 true 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 [2024-12-05 20:02:54.111112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.911 [2024-12-05 20:02:54.111172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.911 [2024-12-05 20:02:54.111196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.911 [2024-12-05 20:02:54.111207] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.911 [2024-12-05 20:02:54.113619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.911 [2024-12-05 20:02:54.113667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.911 BaseBdev1 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.911 BaseBdev2_malloc 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 true 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 [2024-12-05 20:02:54.177985] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.911 [2024-12-05 20:02:54.178082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.911 [2024-12-05 20:02:54.178103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.911 [2024-12-05 20:02:54.178113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.911 [2024-12-05 20:02:54.180257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.911 [2024-12-05 20:02:54.180308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.911 BaseBdev2 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:52.911 
20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 [2024-12-05 20:02:54.190031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.911 [2024-12-05 20:02:54.191853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.911 [2024-12-05 20:02:54.192095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.911 [2024-12-05 20:02:54.192113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:52.911 [2024-12-05 20:02:54.192368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:52.911 [2024-12-05 20:02:54.192564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.911 [2024-12-05 20:02:54.192579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:52.911 [2024-12-05 20:02:54.192752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.911 "name": "raid_bdev1", 00:09:52.911 "uuid": "87efd3e0-c14d-4e40-aca5-7bdea1526067", 00:09:52.911 "strip_size_kb": 64, 00:09:52.911 "state": "online", 00:09:52.911 "raid_level": "concat", 00:09:52.911 "superblock": true, 00:09:52.911 "num_base_bdevs": 2, 00:09:52.911 "num_base_bdevs_discovered": 2, 00:09:52.911 "num_base_bdevs_operational": 2, 00:09:52.911 "base_bdevs_list": [ 00:09:52.911 { 00:09:52.911 "name": "BaseBdev1", 00:09:52.911 "uuid": "dc0b2d17-8871-5ed9-9b39-ba0a5e77ee66", 00:09:52.911 "is_configured": true, 00:09:52.911 "data_offset": 2048, 00:09:52.911 "data_size": 63488 00:09:52.911 }, 00:09:52.911 { 00:09:52.911 "name": "BaseBdev2", 00:09:52.911 "uuid": "c542703d-86c3-5441-9b3f-8688c3602b55", 00:09:52.911 "is_configured": true, 00:09:52.911 "data_offset": 2048, 00:09:52.911 "data_size": 63488 00:09:52.911 } 00:09:52.911 ] 00:09:52.911 }' 00:09:52.911 20:02:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.911 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.480 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.480 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.480 [2024-12-05 20:02:54.738518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.418 "name": "raid_bdev1", 00:09:54.418 "uuid": "87efd3e0-c14d-4e40-aca5-7bdea1526067", 00:09:54.418 "strip_size_kb": 64, 00:09:54.418 "state": "online", 00:09:54.418 "raid_level": "concat", 00:09:54.418 "superblock": true, 00:09:54.418 "num_base_bdevs": 2, 00:09:54.418 "num_base_bdevs_discovered": 2, 00:09:54.418 "num_base_bdevs_operational": 2, 00:09:54.418 "base_bdevs_list": [ 00:09:54.418 { 00:09:54.418 "name": "BaseBdev1", 00:09:54.418 "uuid": "dc0b2d17-8871-5ed9-9b39-ba0a5e77ee66", 00:09:54.418 "is_configured": true, 00:09:54.418 "data_offset": 2048, 00:09:54.418 "data_size": 63488 00:09:54.418 }, 00:09:54.418 { 00:09:54.418 "name": "BaseBdev2", 00:09:54.418 "uuid": "c542703d-86c3-5441-9b3f-8688c3602b55", 00:09:54.418 "is_configured": true, 00:09:54.418 "data_offset": 2048, 00:09:54.418 "data_size": 63488 00:09:54.418 } 00:09:54.418 ] 00:09:54.418 }' 00:09:54.418 20:02:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.418 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 [2024-12-05 20:02:56.066560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.676 [2024-12-05 20:02:56.066597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.676 [2024-12-05 20:02:56.069270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.676 [2024-12-05 20:02:56.069371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.676 [2024-12-05 20:02:56.069411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.676 [2024-12-05 20:02:56.069423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:54.676 { 00:09:54.676 "results": [ 00:09:54.676 { 00:09:54.676 "job": "raid_bdev1", 00:09:54.676 "core_mask": "0x1", 00:09:54.676 "workload": "randrw", 00:09:54.676 "percentage": 50, 00:09:54.676 "status": "finished", 00:09:54.676 "queue_depth": 1, 00:09:54.676 "io_size": 131072, 00:09:54.676 "runtime": 1.328792, 00:09:54.676 "iops": 15286.8168983558, 00:09:54.676 "mibps": 1910.852112294475, 00:09:54.676 "io_failed": 1, 00:09:54.676 "io_timeout": 0, 00:09:54.676 "avg_latency_us": 90.58104131940756, 00:09:54.676 "min_latency_us": 26.717903930131005, 00:09:54.676 "max_latency_us": 1359.3711790393013 00:09:54.676 } 00:09:54.676 ], 00:09:54.676 "core_count": 1 00:09:54.676 } 00:09:54.676 20:02:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62532 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62532 ']' 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62532 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.676 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62532 00:09:54.935 killing process with pid 62532 00:09:54.935 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.935 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.935 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62532' 00:09:54.935 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62532 00:09:54.935 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62532 00:09:54.935 [2024-12-05 20:02:56.114791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.935 [2024-12-05 20:02:56.257347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IEKzoyFObX 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:56.313 20:02:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.313 ************************************ 00:09:56.313 END TEST raid_read_error_test 00:09:56.313 ************************************ 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:56.313 00:09:56.313 real 0m4.392s 00:09:56.313 user 0m5.232s 00:09:56.313 sys 0m0.553s 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.313 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.313 20:02:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:56.313 20:02:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.313 20:02:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.313 20:02:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.313 ************************************ 00:09:56.313 START TEST raid_write_error_test 00:09:56.313 ************************************ 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BkGlG3B2Mv 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62678 
00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62678 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62678 ']' 00:09:56.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.313 20:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.313 [2024-12-05 20:02:57.649991] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:56.313 [2024-12-05 20:02:57.650115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:09:56.574 [2024-12-05 20:02:57.824938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.574 [2024-12-05 20:02:57.946419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.835 [2024-12-05 20:02:58.152267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.835 [2024-12-05 20:02:58.152341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.111 BaseBdev1_malloc 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.111 true 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:57.111 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.112 [2024-12-05 20:02:58.537311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.112 [2024-12-05 20:02:58.537423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.112 [2024-12-05 20:02:58.537447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.112 [2024-12-05 20:02:58.537459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.112 [2024-12-05 20:02:58.539628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.112 [2024-12-05 20:02:58.539669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.112 BaseBdev1 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.112 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.371 BaseBdev2_malloc 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.371 20:02:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.371 true 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.371 [2024-12-05 20:02:58.603460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.371 [2024-12-05 20:02:58.603511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.371 [2024-12-05 20:02:58.603527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.371 [2024-12-05 20:02:58.603537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.371 [2024-12-05 20:02:58.605611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.371 [2024-12-05 20:02:58.605650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.371 BaseBdev2 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.371 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.372 [2024-12-05 20:02:58.615502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:57.372 [2024-12-05 20:02:58.617331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.372 [2024-12-05 20:02:58.617535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.372 [2024-12-05 20:02:58.617551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:57.372 [2024-12-05 20:02:58.617773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.372 [2024-12-05 20:02:58.617943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.372 [2024-12-05 20:02:58.617957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:57.372 [2024-12-05 20:02:58.618105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.372 20:02:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.372 "name": "raid_bdev1", 00:09:57.372 "uuid": "e32fb82e-70a5-45c3-9020-0a56d7ed4217", 00:09:57.372 "strip_size_kb": 64, 00:09:57.372 "state": "online", 00:09:57.372 "raid_level": "concat", 00:09:57.372 "superblock": true, 00:09:57.372 "num_base_bdevs": 2, 00:09:57.372 "num_base_bdevs_discovered": 2, 00:09:57.372 "num_base_bdevs_operational": 2, 00:09:57.372 "base_bdevs_list": [ 00:09:57.372 { 00:09:57.372 "name": "BaseBdev1", 00:09:57.372 "uuid": "418bad37-374e-54c2-9491-549989f9320b", 00:09:57.372 "is_configured": true, 00:09:57.372 "data_offset": 2048, 00:09:57.372 "data_size": 63488 00:09:57.372 }, 00:09:57.372 { 00:09:57.372 "name": "BaseBdev2", 00:09:57.372 "uuid": "5200ebfe-0f65-5efc-8aed-5aebf56810a8", 00:09:57.372 "is_configured": true, 00:09:57.372 "data_offset": 2048, 00:09:57.372 "data_size": 63488 00:09:57.372 } 00:09:57.372 ] 00:09:57.372 }' 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.372 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.631 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:57.631 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.891 [2024-12-05 20:02:59.123760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.831 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.832 "name": "raid_bdev1", 00:09:58.832 "uuid": "e32fb82e-70a5-45c3-9020-0a56d7ed4217", 00:09:58.832 "strip_size_kb": 64, 00:09:58.832 "state": "online", 00:09:58.832 "raid_level": "concat", 00:09:58.832 "superblock": true, 00:09:58.832 "num_base_bdevs": 2, 00:09:58.832 "num_base_bdevs_discovered": 2, 00:09:58.832 "num_base_bdevs_operational": 2, 00:09:58.832 "base_bdevs_list": [ 00:09:58.832 { 00:09:58.832 "name": "BaseBdev1", 00:09:58.832 "uuid": "418bad37-374e-54c2-9491-549989f9320b", 00:09:58.832 "is_configured": true, 00:09:58.832 "data_offset": 2048, 00:09:58.832 "data_size": 63488 00:09:58.832 }, 00:09:58.832 { 00:09:58.832 "name": "BaseBdev2", 00:09:58.832 "uuid": "5200ebfe-0f65-5efc-8aed-5aebf56810a8", 00:09:58.832 "is_configured": true, 00:09:58.832 "data_offset": 2048, 00:09:58.832 "data_size": 63488 00:09:58.832 } 00:09:58.832 ] 00:09:58.832 }' 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.832 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 [2024-12-05 20:03:00.476090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.093 [2024-12-05 20:03:00.476177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.093 [2024-12-05 20:03:00.478933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.093 [2024-12-05 20:03:00.479020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.093 [2024-12-05 20:03:00.479073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.093 [2024-12-05 20:03:00.479118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:59.093 { 00:09:59.093 "results": [ 00:09:59.093 { 00:09:59.093 "job": "raid_bdev1", 00:09:59.093 "core_mask": "0x1", 00:09:59.093 "workload": "randrw", 00:09:59.093 "percentage": 50, 00:09:59.093 "status": "finished", 00:09:59.093 "queue_depth": 1, 00:09:59.093 "io_size": 131072, 00:09:59.093 "runtime": 1.353112, 00:09:59.093 "iops": 15151.000065035267, 00:09:59.093 "mibps": 1893.8750081294083, 00:09:59.093 "io_failed": 1, 00:09:59.093 "io_timeout": 0, 00:09:59.093 "avg_latency_us": 91.39163707108774, 00:09:59.093 "min_latency_us": 26.829694323144103, 00:09:59.093 "max_latency_us": 1581.1633187772925 00:09:59.093 } 00:09:59.093 ], 00:09:59.093 "core_count": 1 00:09:59.093 } 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62678 00:09:59.093 20:03:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62678 ']' 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62678 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62678 00:09:59.093 killing process with pid 62678 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62678' 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62678 00:09:59.093 [2024-12-05 20:03:00.514880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.093 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62678 00:09:59.354 [2024-12-05 20:03:00.652665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BkGlG3B2Mv 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:00.735 ************************************ 00:10:00.735 END TEST raid_write_error_test 00:10:00.735 ************************************ 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:00.735 00:10:00.735 real 0m4.339s 00:10:00.735 user 0m5.145s 00:10:00.735 sys 0m0.517s 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.735 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.735 20:03:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:00.735 20:03:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:00.735 20:03:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.735 20:03:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.735 20:03:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.735 ************************************ 00:10:00.735 START TEST raid_state_function_test 00:10:00.735 ************************************ 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62816 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62816' 00:10:00.735 Process raid pid: 62816 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62816 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62816 ']' 00:10:00.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.735 20:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.735 [2024-12-05 20:03:02.055560] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:00.735 [2024-12-05 20:03:02.055775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.995 [2024-12-05 20:03:02.232387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.995 [2024-12-05 20:03:02.349138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.255 [2024-12-05 20:03:02.549711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.255 [2024-12-05 20:03:02.549800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.515 [2024-12-05 20:03:02.913205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.515 [2024-12-05 20:03:02.913313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.515 [2024-12-05 20:03:02.913375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.515 [2024-12-05 20:03:02.913400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.515 20:03:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.515 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.773 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.773 "name": "Existed_Raid", 00:10:01.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.773 "strip_size_kb": 0, 00:10:01.773 "state": "configuring", 00:10:01.773 
"raid_level": "raid1", 00:10:01.773 "superblock": false, 00:10:01.773 "num_base_bdevs": 2, 00:10:01.773 "num_base_bdevs_discovered": 0, 00:10:01.773 "num_base_bdevs_operational": 2, 00:10:01.773 "base_bdevs_list": [ 00:10:01.773 { 00:10:01.773 "name": "BaseBdev1", 00:10:01.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.773 "is_configured": false, 00:10:01.773 "data_offset": 0, 00:10:01.773 "data_size": 0 00:10:01.773 }, 00:10:01.773 { 00:10:01.773 "name": "BaseBdev2", 00:10:01.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.773 "is_configured": false, 00:10:01.773 "data_offset": 0, 00:10:01.773 "data_size": 0 00:10:01.773 } 00:10:01.773 ] 00:10:01.773 }' 00:10:01.773 20:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.773 20:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.033 [2024-12-05 20:03:03.344432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.033 [2024-12-05 20:03:03.344520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:02.033 [2024-12-05 20:03:03.356404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.033 [2024-12-05 20:03:03.356489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.033 [2024-12-05 20:03:03.356518] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.033 [2024-12-05 20:03:03.356544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.033 [2024-12-05 20:03:03.404667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.033 BaseBdev1 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.033 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.034 [ 00:10:02.034 { 00:10:02.034 "name": "BaseBdev1", 00:10:02.034 "aliases": [ 00:10:02.034 "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26" 00:10:02.034 ], 00:10:02.034 "product_name": "Malloc disk", 00:10:02.034 "block_size": 512, 00:10:02.034 "num_blocks": 65536, 00:10:02.034 "uuid": "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26", 00:10:02.034 "assigned_rate_limits": { 00:10:02.034 "rw_ios_per_sec": 0, 00:10:02.034 "rw_mbytes_per_sec": 0, 00:10:02.034 "r_mbytes_per_sec": 0, 00:10:02.034 "w_mbytes_per_sec": 0 00:10:02.034 }, 00:10:02.034 "claimed": true, 00:10:02.034 "claim_type": "exclusive_write", 00:10:02.034 "zoned": false, 00:10:02.034 "supported_io_types": { 00:10:02.034 "read": true, 00:10:02.034 "write": true, 00:10:02.034 "unmap": true, 00:10:02.034 "flush": true, 00:10:02.034 "reset": true, 00:10:02.034 "nvme_admin": false, 00:10:02.034 "nvme_io": false, 00:10:02.034 "nvme_io_md": false, 00:10:02.034 "write_zeroes": true, 00:10:02.034 "zcopy": true, 00:10:02.034 "get_zone_info": false, 00:10:02.034 "zone_management": false, 00:10:02.034 "zone_append": false, 00:10:02.034 "compare": false, 00:10:02.034 "compare_and_write": false, 00:10:02.034 "abort": true, 00:10:02.034 "seek_hole": false, 00:10:02.034 "seek_data": false, 00:10:02.034 "copy": true, 00:10:02.034 "nvme_iov_md": 
false 00:10:02.034 }, 00:10:02.034 "memory_domains": [ 00:10:02.034 { 00:10:02.034 "dma_device_id": "system", 00:10:02.034 "dma_device_type": 1 00:10:02.034 }, 00:10:02.034 { 00:10:02.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.034 "dma_device_type": 2 00:10:02.034 } 00:10:02.034 ], 00:10:02.034 "driver_specific": {} 00:10:02.034 } 00:10:02.034 ] 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.034 
20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.034 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.312 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.312 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.312 "name": "Existed_Raid", 00:10:02.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.312 "strip_size_kb": 0, 00:10:02.312 "state": "configuring", 00:10:02.312 "raid_level": "raid1", 00:10:02.312 "superblock": false, 00:10:02.312 "num_base_bdevs": 2, 00:10:02.312 "num_base_bdevs_discovered": 1, 00:10:02.312 "num_base_bdevs_operational": 2, 00:10:02.312 "base_bdevs_list": [ 00:10:02.312 { 00:10:02.312 "name": "BaseBdev1", 00:10:02.312 "uuid": "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26", 00:10:02.312 "is_configured": true, 00:10:02.312 "data_offset": 0, 00:10:02.312 "data_size": 65536 00:10:02.312 }, 00:10:02.312 { 00:10:02.312 "name": "BaseBdev2", 00:10:02.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.312 "is_configured": false, 00:10:02.312 "data_offset": 0, 00:10:02.312 "data_size": 0 00:10:02.312 } 00:10:02.312 ] 00:10:02.312 }' 00:10:02.312 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.312 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 [2024-12-05 20:03:03.887946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.571 [2024-12-05 20:03:03.888060] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 [2024-12-05 20:03:03.899975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.571 [2024-12-05 20:03:03.902020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.571 [2024-12-05 20:03:03.902105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.571 "name": "Existed_Raid", 00:10:02.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.571 "strip_size_kb": 0, 00:10:02.571 "state": "configuring", 00:10:02.571 "raid_level": "raid1", 00:10:02.571 "superblock": false, 00:10:02.571 "num_base_bdevs": 2, 00:10:02.571 "num_base_bdevs_discovered": 1, 00:10:02.571 "num_base_bdevs_operational": 2, 00:10:02.571 "base_bdevs_list": [ 00:10:02.571 { 00:10:02.571 "name": "BaseBdev1", 00:10:02.571 "uuid": "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26", 00:10:02.571 "is_configured": true, 00:10:02.571 "data_offset": 0, 00:10:02.571 "data_size": 65536 00:10:02.571 }, 00:10:02.571 { 00:10:02.571 "name": "BaseBdev2", 00:10:02.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.571 "is_configured": false, 00:10:02.571 "data_offset": 0, 00:10:02.571 "data_size": 0 00:10:02.571 } 00:10:02.571 ] 
00:10:02.571 }' 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.571 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.138 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.138 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.138 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 [2024-12-05 20:03:04.365403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.139 [2024-12-05 20:03:04.365577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:03.139 [2024-12-05 20:03:04.365604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:03.139 [2024-12-05 20:03:04.365904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:03.139 [2024-12-05 20:03:04.366133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:03.139 [2024-12-05 20:03:04.366183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:03.139 [2024-12-05 20:03:04.366488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.139 BaseBdev2 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 [ 00:10:03.139 { 00:10:03.139 "name": "BaseBdev2", 00:10:03.139 "aliases": [ 00:10:03.139 "1e4a1559-918f-4285-a17e-41c292f8caa3" 00:10:03.139 ], 00:10:03.139 "product_name": "Malloc disk", 00:10:03.139 "block_size": 512, 00:10:03.139 "num_blocks": 65536, 00:10:03.139 "uuid": "1e4a1559-918f-4285-a17e-41c292f8caa3", 00:10:03.139 "assigned_rate_limits": { 00:10:03.139 "rw_ios_per_sec": 0, 00:10:03.139 "rw_mbytes_per_sec": 0, 00:10:03.139 "r_mbytes_per_sec": 0, 00:10:03.139 "w_mbytes_per_sec": 0 00:10:03.139 }, 00:10:03.139 "claimed": true, 00:10:03.139 "claim_type": "exclusive_write", 00:10:03.139 "zoned": false, 00:10:03.139 "supported_io_types": { 00:10:03.139 "read": true, 00:10:03.139 "write": true, 00:10:03.139 "unmap": true, 00:10:03.139 "flush": true, 00:10:03.139 "reset": true, 00:10:03.139 "nvme_admin": false, 00:10:03.139 "nvme_io": false, 00:10:03.139 "nvme_io_md": false, 00:10:03.139 "write_zeroes": 
true, 00:10:03.139 "zcopy": true, 00:10:03.139 "get_zone_info": false, 00:10:03.139 "zone_management": false, 00:10:03.139 "zone_append": false, 00:10:03.139 "compare": false, 00:10:03.139 "compare_and_write": false, 00:10:03.139 "abort": true, 00:10:03.139 "seek_hole": false, 00:10:03.139 "seek_data": false, 00:10:03.139 "copy": true, 00:10:03.139 "nvme_iov_md": false 00:10:03.139 }, 00:10:03.139 "memory_domains": [ 00:10:03.139 { 00:10:03.139 "dma_device_id": "system", 00:10:03.139 "dma_device_type": 1 00:10:03.139 }, 00:10:03.139 { 00:10:03.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.139 "dma_device_type": 2 00:10:03.139 } 00:10:03.139 ], 00:10:03.139 "driver_specific": {} 00:10:03.139 } 00:10:03.139 ] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.139 20:03:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.139 "name": "Existed_Raid", 00:10:03.139 "uuid": "7114ecb5-ce29-46fc-b005-ac2cad7a7495", 00:10:03.139 "strip_size_kb": 0, 00:10:03.139 "state": "online", 00:10:03.139 "raid_level": "raid1", 00:10:03.139 "superblock": false, 00:10:03.139 "num_base_bdevs": 2, 00:10:03.139 "num_base_bdevs_discovered": 2, 00:10:03.139 "num_base_bdevs_operational": 2, 00:10:03.139 "base_bdevs_list": [ 00:10:03.139 { 00:10:03.139 "name": "BaseBdev1", 00:10:03.139 "uuid": "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26", 00:10:03.139 "is_configured": true, 00:10:03.139 "data_offset": 0, 00:10:03.139 "data_size": 65536 00:10:03.139 }, 00:10:03.139 { 00:10:03.139 "name": "BaseBdev2", 00:10:03.139 "uuid": "1e4a1559-918f-4285-a17e-41c292f8caa3", 00:10:03.139 "is_configured": true, 00:10:03.139 "data_offset": 0, 00:10:03.139 "data_size": 65536 00:10:03.139 } 00:10:03.139 ] 00:10:03.139 }' 00:10:03.139 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.139 20:03:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.399 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.399 [2024-12-05 20:03:04.824987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.658 "name": "Existed_Raid", 00:10:03.658 "aliases": [ 00:10:03.658 "7114ecb5-ce29-46fc-b005-ac2cad7a7495" 00:10:03.658 ], 00:10:03.658 "product_name": "Raid Volume", 00:10:03.658 "block_size": 512, 00:10:03.658 "num_blocks": 65536, 00:10:03.658 "uuid": "7114ecb5-ce29-46fc-b005-ac2cad7a7495", 00:10:03.658 "assigned_rate_limits": { 00:10:03.658 "rw_ios_per_sec": 0, 00:10:03.658 "rw_mbytes_per_sec": 0, 00:10:03.658 "r_mbytes_per_sec": 0, 00:10:03.658 
"w_mbytes_per_sec": 0 00:10:03.658 }, 00:10:03.658 "claimed": false, 00:10:03.658 "zoned": false, 00:10:03.658 "supported_io_types": { 00:10:03.658 "read": true, 00:10:03.658 "write": true, 00:10:03.658 "unmap": false, 00:10:03.658 "flush": false, 00:10:03.658 "reset": true, 00:10:03.658 "nvme_admin": false, 00:10:03.658 "nvme_io": false, 00:10:03.658 "nvme_io_md": false, 00:10:03.658 "write_zeroes": true, 00:10:03.658 "zcopy": false, 00:10:03.658 "get_zone_info": false, 00:10:03.658 "zone_management": false, 00:10:03.658 "zone_append": false, 00:10:03.658 "compare": false, 00:10:03.658 "compare_and_write": false, 00:10:03.658 "abort": false, 00:10:03.658 "seek_hole": false, 00:10:03.658 "seek_data": false, 00:10:03.658 "copy": false, 00:10:03.658 "nvme_iov_md": false 00:10:03.658 }, 00:10:03.658 "memory_domains": [ 00:10:03.658 { 00:10:03.658 "dma_device_id": "system", 00:10:03.658 "dma_device_type": 1 00:10:03.658 }, 00:10:03.658 { 00:10:03.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.658 "dma_device_type": 2 00:10:03.658 }, 00:10:03.658 { 00:10:03.658 "dma_device_id": "system", 00:10:03.658 "dma_device_type": 1 00:10:03.658 }, 00:10:03.658 { 00:10:03.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.658 "dma_device_type": 2 00:10:03.658 } 00:10:03.658 ], 00:10:03.658 "driver_specific": { 00:10:03.658 "raid": { 00:10:03.658 "uuid": "7114ecb5-ce29-46fc-b005-ac2cad7a7495", 00:10:03.658 "strip_size_kb": 0, 00:10:03.658 "state": "online", 00:10:03.658 "raid_level": "raid1", 00:10:03.658 "superblock": false, 00:10:03.658 "num_base_bdevs": 2, 00:10:03.658 "num_base_bdevs_discovered": 2, 00:10:03.658 "num_base_bdevs_operational": 2, 00:10:03.658 "base_bdevs_list": [ 00:10:03.658 { 00:10:03.658 "name": "BaseBdev1", 00:10:03.658 "uuid": "77afe173-18f6-4ffe-a1ba-21dbcb1d3e26", 00:10:03.658 "is_configured": true, 00:10:03.658 "data_offset": 0, 00:10:03.658 "data_size": 65536 00:10:03.658 }, 00:10:03.658 { 00:10:03.658 "name": "BaseBdev2", 00:10:03.658 "uuid": 
"1e4a1559-918f-4285-a17e-41c292f8caa3", 00:10:03.658 "is_configured": true, 00:10:03.658 "data_offset": 0, 00:10:03.658 "data_size": 65536 00:10:03.658 } 00:10:03.658 ] 00:10:03.658 } 00:10:03.658 } 00:10:03.658 }' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.658 BaseBdev2' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.658 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.658 20:03:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.658 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.658 [2024-12-05 20:03:05.060342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.917 "name": "Existed_Raid", 00:10:03.917 "uuid": "7114ecb5-ce29-46fc-b005-ac2cad7a7495", 00:10:03.917 "strip_size_kb": 0, 00:10:03.917 "state": "online", 00:10:03.917 "raid_level": "raid1", 00:10:03.917 "superblock": false, 00:10:03.917 "num_base_bdevs": 2, 00:10:03.917 "num_base_bdevs_discovered": 1, 00:10:03.917 "num_base_bdevs_operational": 1, 00:10:03.917 "base_bdevs_list": [ 00:10:03.917 { 
00:10:03.917 "name": null, 00:10:03.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.917 "is_configured": false, 00:10:03.917 "data_offset": 0, 00:10:03.917 "data_size": 65536 00:10:03.917 }, 00:10:03.917 { 00:10:03.917 "name": "BaseBdev2", 00:10:03.917 "uuid": "1e4a1559-918f-4285-a17e-41c292f8caa3", 00:10:03.917 "is_configured": true, 00:10:03.917 "data_offset": 0, 00:10:03.917 "data_size": 65536 00:10:03.917 } 00:10:03.917 ] 00:10:03.917 }' 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.917 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.175 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:04.434 [2024-12-05 20:03:05.666077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.434 [2024-12-05 20:03:05.666226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.434 [2024-12-05 20:03:05.763696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.434 [2024-12-05 20:03:05.763795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.434 [2024-12-05 20:03:05.763838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62816 00:10:04.434 20:03:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62816 ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62816 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62816 00:10:04.434 killing process with pid 62816 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62816' 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62816 00:10:04.434 [2024-12-05 20:03:05.857335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.434 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62816 00:10:04.693 [2024-12-05 20:03:05.874579] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.631 ************************************ 00:10:05.631 END TEST raid_state_function_test 00:10:05.631 ************************************ 00:10:05.631 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.631 00:10:05.631 real 0m5.068s 00:10:05.631 user 0m7.314s 00:10:05.631 sys 0m0.809s 00:10:05.631 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.631 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.891 20:03:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:05.891 20:03:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.891 20:03:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.891 20:03:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.891 ************************************ 00:10:05.891 START TEST raid_state_function_test_sb 00:10:05.891 ************************************ 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63069 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63069' 00:10:05.891 Process raid pid: 63069 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63069 00:10:05.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63069 ']' 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.891 20:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.891 [2024-12-05 20:03:07.186483] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:05.891 [2024-12-05 20:03:07.186699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.151 [2024-12-05 20:03:07.361561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.151 [2024-12-05 20:03:07.479684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.410 [2024-12-05 20:03:07.696290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.410 [2024-12-05 20:03:07.696385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.668 [2024-12-05 20:03:08.033407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.668 [2024-12-05 20:03:08.033512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.668 [2024-12-05 20:03:08.033529] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.668 [2024-12-05 20:03:08.033557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.668 20:03:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.668 "name": "Existed_Raid", 00:10:06.668 "uuid": "9423ded8-4273-408e-86d9-8ab757ac5e9b", 00:10:06.668 "strip_size_kb": 0, 00:10:06.668 "state": "configuring", 00:10:06.668 "raid_level": "raid1", 00:10:06.668 "superblock": true, 00:10:06.668 "num_base_bdevs": 2, 00:10:06.668 "num_base_bdevs_discovered": 0, 00:10:06.668 "num_base_bdevs_operational": 2, 00:10:06.668 "base_bdevs_list": [ 00:10:06.668 { 00:10:06.668 "name": "BaseBdev1", 00:10:06.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.668 "is_configured": false, 00:10:06.668 "data_offset": 0, 00:10:06.668 "data_size": 0 00:10:06.668 }, 00:10:06.668 { 00:10:06.668 "name": "BaseBdev2", 00:10:06.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.668 "is_configured": false, 00:10:06.668 "data_offset": 0, 00:10:06.668 "data_size": 0 00:10:06.668 } 00:10:06.668 ] 00:10:06.668 }' 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.668 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.236 
20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 [2024-12-05 20:03:08.488581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.236 [2024-12-05 20:03:08.488688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 [2024-12-05 20:03:08.500554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.236 [2024-12-05 20:03:08.500659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.236 [2024-12-05 20:03:08.500695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.236 [2024-12-05 20:03:08.500725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 [2024-12-05 
20:03:08.550314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.236 BaseBdev1 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 [ 00:10:07.236 { 00:10:07.236 "name": "BaseBdev1", 00:10:07.236 "aliases": [ 00:10:07.236 "68ac2c41-c0be-40f7-92b2-f684ac68cea9" 00:10:07.236 ], 00:10:07.236 "product_name": "Malloc disk", 00:10:07.236 "block_size": 512, 00:10:07.236 "num_blocks": 
65536, 00:10:07.236 "uuid": "68ac2c41-c0be-40f7-92b2-f684ac68cea9", 00:10:07.236 "assigned_rate_limits": { 00:10:07.236 "rw_ios_per_sec": 0, 00:10:07.236 "rw_mbytes_per_sec": 0, 00:10:07.236 "r_mbytes_per_sec": 0, 00:10:07.236 "w_mbytes_per_sec": 0 00:10:07.236 }, 00:10:07.236 "claimed": true, 00:10:07.236 "claim_type": "exclusive_write", 00:10:07.236 "zoned": false, 00:10:07.236 "supported_io_types": { 00:10:07.236 "read": true, 00:10:07.236 "write": true, 00:10:07.236 "unmap": true, 00:10:07.236 "flush": true, 00:10:07.236 "reset": true, 00:10:07.236 "nvme_admin": false, 00:10:07.236 "nvme_io": false, 00:10:07.236 "nvme_io_md": false, 00:10:07.236 "write_zeroes": true, 00:10:07.236 "zcopy": true, 00:10:07.236 "get_zone_info": false, 00:10:07.236 "zone_management": false, 00:10:07.236 "zone_append": false, 00:10:07.236 "compare": false, 00:10:07.236 "compare_and_write": false, 00:10:07.236 "abort": true, 00:10:07.236 "seek_hole": false, 00:10:07.236 "seek_data": false, 00:10:07.236 "copy": true, 00:10:07.236 "nvme_iov_md": false 00:10:07.236 }, 00:10:07.236 "memory_domains": [ 00:10:07.236 { 00:10:07.236 "dma_device_id": "system", 00:10:07.236 "dma_device_type": 1 00:10:07.236 }, 00:10:07.236 { 00:10:07.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.236 "dma_device_type": 2 00:10:07.236 } 00:10:07.236 ], 00:10:07.236 "driver_specific": {} 00:10:07.236 } 00:10:07.236 ] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.236 "name": "Existed_Raid", 00:10:07.236 "uuid": "79783b77-5242-4956-b3b0-e55ae51f4615", 00:10:07.236 "strip_size_kb": 0, 00:10:07.236 "state": "configuring", 00:10:07.236 "raid_level": "raid1", 00:10:07.236 "superblock": true, 00:10:07.236 "num_base_bdevs": 2, 00:10:07.236 "num_base_bdevs_discovered": 1, 00:10:07.236 "num_base_bdevs_operational": 2, 00:10:07.236 "base_bdevs_list": [ 00:10:07.236 { 00:10:07.236 "name": "BaseBdev1", 00:10:07.237 "uuid": 
"68ac2c41-c0be-40f7-92b2-f684ac68cea9", 00:10:07.237 "is_configured": true, 00:10:07.237 "data_offset": 2048, 00:10:07.237 "data_size": 63488 00:10:07.237 }, 00:10:07.237 { 00:10:07.237 "name": "BaseBdev2", 00:10:07.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.237 "is_configured": false, 00:10:07.237 "data_offset": 0, 00:10:07.237 "data_size": 0 00:10:07.237 } 00:10:07.237 ] 00:10:07.237 }' 00:10:07.237 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.237 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.805 [2024-12-05 20:03:09.065506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.805 [2024-12-05 20:03:09.065606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.805 [2024-12-05 20:03:09.077534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.805 [2024-12-05 20:03:09.079538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:10:07.805 [2024-12-05 20:03:09.079634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.805 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.805 "name": "Existed_Raid", 00:10:07.805 "uuid": "5023da4c-55e7-46ad-82bd-84cb35d8f246", 00:10:07.805 "strip_size_kb": 0, 00:10:07.805 "state": "configuring", 00:10:07.805 "raid_level": "raid1", 00:10:07.805 "superblock": true, 00:10:07.805 "num_base_bdevs": 2, 00:10:07.806 "num_base_bdevs_discovered": 1, 00:10:07.806 "num_base_bdevs_operational": 2, 00:10:07.806 "base_bdevs_list": [ 00:10:07.806 { 00:10:07.806 "name": "BaseBdev1", 00:10:07.806 "uuid": "68ac2c41-c0be-40f7-92b2-f684ac68cea9", 00:10:07.806 "is_configured": true, 00:10:07.806 "data_offset": 2048, 00:10:07.806 "data_size": 63488 00:10:07.806 }, 00:10:07.806 { 00:10:07.806 "name": "BaseBdev2", 00:10:07.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.806 "is_configured": false, 00:10:07.806 "data_offset": 0, 00:10:07.806 "data_size": 0 00:10:07.806 } 00:10:07.806 ] 00:10:07.806 }' 00:10:07.806 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.806 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.372 [2024-12-05 20:03:09.580884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.372 [2024-12-05 20:03:09.581309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:10:08.372 [2024-12-05 20:03:09.581368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.372 [2024-12-05 20:03:09.581669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:08.372 BaseBdev2 00:10:08.372 [2024-12-05 20:03:09.581877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.372 [2024-12-05 20:03:09.581907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:08.372 [2024-12-05 20:03:09.582057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.372 20:03:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.372 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.372 [ 00:10:08.372 { 00:10:08.372 "name": "BaseBdev2", 00:10:08.372 "aliases": [ 00:10:08.372 "e6f80fc6-d9c1-4f2b-8943-1ba8defa31ef" 00:10:08.372 ], 00:10:08.372 "product_name": "Malloc disk", 00:10:08.372 "block_size": 512, 00:10:08.372 "num_blocks": 65536, 00:10:08.372 "uuid": "e6f80fc6-d9c1-4f2b-8943-1ba8defa31ef", 00:10:08.372 "assigned_rate_limits": { 00:10:08.372 "rw_ios_per_sec": 0, 00:10:08.372 "rw_mbytes_per_sec": 0, 00:10:08.372 "r_mbytes_per_sec": 0, 00:10:08.372 "w_mbytes_per_sec": 0 00:10:08.372 }, 00:10:08.372 "claimed": true, 00:10:08.372 "claim_type": "exclusive_write", 00:10:08.372 "zoned": false, 00:10:08.372 "supported_io_types": { 00:10:08.372 "read": true, 00:10:08.373 "write": true, 00:10:08.373 "unmap": true, 00:10:08.373 "flush": true, 00:10:08.373 "reset": true, 00:10:08.373 "nvme_admin": false, 00:10:08.373 "nvme_io": false, 00:10:08.373 "nvme_io_md": false, 00:10:08.373 "write_zeroes": true, 00:10:08.373 "zcopy": true, 00:10:08.373 "get_zone_info": false, 00:10:08.373 "zone_management": false, 00:10:08.373 "zone_append": false, 00:10:08.373 "compare": false, 00:10:08.373 "compare_and_write": false, 00:10:08.373 "abort": true, 00:10:08.373 "seek_hole": false, 00:10:08.373 "seek_data": false, 00:10:08.373 "copy": true, 00:10:08.373 "nvme_iov_md": false 00:10:08.373 }, 00:10:08.373 "memory_domains": [ 00:10:08.373 { 00:10:08.373 "dma_device_id": "system", 00:10:08.373 "dma_device_type": 1 00:10:08.373 }, 00:10:08.373 { 00:10:08.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.373 "dma_device_type": 2 00:10:08.373 } 00:10:08.373 ], 00:10:08.373 "driver_specific": {} 00:10:08.373 } 00:10:08.373 ] 
00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.373 
20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.373 "name": "Existed_Raid", 00:10:08.373 "uuid": "5023da4c-55e7-46ad-82bd-84cb35d8f246", 00:10:08.373 "strip_size_kb": 0, 00:10:08.373 "state": "online", 00:10:08.373 "raid_level": "raid1", 00:10:08.373 "superblock": true, 00:10:08.373 "num_base_bdevs": 2, 00:10:08.373 "num_base_bdevs_discovered": 2, 00:10:08.373 "num_base_bdevs_operational": 2, 00:10:08.373 "base_bdevs_list": [ 00:10:08.373 { 00:10:08.373 "name": "BaseBdev1", 00:10:08.373 "uuid": "68ac2c41-c0be-40f7-92b2-f684ac68cea9", 00:10:08.373 "is_configured": true, 00:10:08.373 "data_offset": 2048, 00:10:08.373 "data_size": 63488 00:10:08.373 }, 00:10:08.373 { 00:10:08.373 "name": "BaseBdev2", 00:10:08.373 "uuid": "e6f80fc6-d9c1-4f2b-8943-1ba8defa31ef", 00:10:08.373 "is_configured": true, 00:10:08.373 "data_offset": 2048, 00:10:08.373 "data_size": 63488 00:10:08.373 } 00:10:08.373 ] 00:10:08.373 }' 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.373 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.635 20:03:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.635 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 [2024-12-05 20:03:10.068532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.901 "name": "Existed_Raid", 00:10:08.901 "aliases": [ 00:10:08.901 "5023da4c-55e7-46ad-82bd-84cb35d8f246" 00:10:08.901 ], 00:10:08.901 "product_name": "Raid Volume", 00:10:08.901 "block_size": 512, 00:10:08.901 "num_blocks": 63488, 00:10:08.901 "uuid": "5023da4c-55e7-46ad-82bd-84cb35d8f246", 00:10:08.901 "assigned_rate_limits": { 00:10:08.901 "rw_ios_per_sec": 0, 00:10:08.901 "rw_mbytes_per_sec": 0, 00:10:08.901 "r_mbytes_per_sec": 0, 00:10:08.901 "w_mbytes_per_sec": 0 00:10:08.901 }, 00:10:08.901 "claimed": false, 00:10:08.901 "zoned": false, 00:10:08.901 "supported_io_types": { 00:10:08.901 "read": true, 00:10:08.901 "write": true, 00:10:08.901 "unmap": false, 00:10:08.901 "flush": false, 00:10:08.901 "reset": true, 00:10:08.901 "nvme_admin": false, 00:10:08.901 "nvme_io": false, 00:10:08.901 "nvme_io_md": false, 00:10:08.901 "write_zeroes": true, 00:10:08.901 "zcopy": false, 00:10:08.901 "get_zone_info": false, 00:10:08.901 "zone_management": false, 00:10:08.901 "zone_append": false, 00:10:08.901 "compare": false, 00:10:08.901 "compare_and_write": false, 00:10:08.901 "abort": false, 
00:10:08.901 "seek_hole": false, 00:10:08.901 "seek_data": false, 00:10:08.901 "copy": false, 00:10:08.901 "nvme_iov_md": false 00:10:08.901 }, 00:10:08.901 "memory_domains": [ 00:10:08.901 { 00:10:08.901 "dma_device_id": "system", 00:10:08.901 "dma_device_type": 1 00:10:08.901 }, 00:10:08.901 { 00:10:08.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.901 "dma_device_type": 2 00:10:08.901 }, 00:10:08.901 { 00:10:08.901 "dma_device_id": "system", 00:10:08.901 "dma_device_type": 1 00:10:08.901 }, 00:10:08.901 { 00:10:08.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.901 "dma_device_type": 2 00:10:08.901 } 00:10:08.901 ], 00:10:08.901 "driver_specific": { 00:10:08.901 "raid": { 00:10:08.901 "uuid": "5023da4c-55e7-46ad-82bd-84cb35d8f246", 00:10:08.901 "strip_size_kb": 0, 00:10:08.901 "state": "online", 00:10:08.901 "raid_level": "raid1", 00:10:08.901 "superblock": true, 00:10:08.901 "num_base_bdevs": 2, 00:10:08.901 "num_base_bdevs_discovered": 2, 00:10:08.901 "num_base_bdevs_operational": 2, 00:10:08.901 "base_bdevs_list": [ 00:10:08.901 { 00:10:08.901 "name": "BaseBdev1", 00:10:08.901 "uuid": "68ac2c41-c0be-40f7-92b2-f684ac68cea9", 00:10:08.901 "is_configured": true, 00:10:08.901 "data_offset": 2048, 00:10:08.901 "data_size": 63488 00:10:08.901 }, 00:10:08.901 { 00:10:08.901 "name": "BaseBdev2", 00:10:08.901 "uuid": "e6f80fc6-d9c1-4f2b-8943-1ba8defa31ef", 00:10:08.901 "is_configured": true, 00:10:08.901 "data_offset": 2048, 00:10:08.901 "data_size": 63488 00:10:08.901 } 00:10:08.901 ] 00:10:08.901 } 00:10:08.901 } 00:10:08.901 }' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.901 BaseBdev2' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.901 20:03:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.901 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 [2024-12-05 20:03:10.303912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.160 "name": "Existed_Raid", 00:10:09.160 "uuid": "5023da4c-55e7-46ad-82bd-84cb35d8f246", 00:10:09.160 "strip_size_kb": 0, 00:10:09.160 "state": "online", 00:10:09.160 "raid_level": "raid1", 00:10:09.160 "superblock": true, 00:10:09.160 "num_base_bdevs": 2, 00:10:09.160 "num_base_bdevs_discovered": 1, 00:10:09.160 "num_base_bdevs_operational": 1, 00:10:09.160 "base_bdevs_list": [ 00:10:09.160 { 00:10:09.160 "name": null, 00:10:09.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.160 "is_configured": false, 00:10:09.160 "data_offset": 0, 00:10:09.160 "data_size": 63488 00:10:09.160 }, 00:10:09.160 { 00:10:09.160 "name": "BaseBdev2", 00:10:09.160 "uuid": "e6f80fc6-d9c1-4f2b-8943-1ba8defa31ef", 00:10:09.160 "is_configured": true, 00:10:09.160 "data_offset": 2048, 00:10:09.160 "data_size": 63488 00:10:09.160 } 00:10:09.160 ] 00:10:09.160 }' 00:10:09.160 20:03:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.160 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.728 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.728 [2024-12-05 20:03:10.906236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.728 [2024-12-05 20:03:10.906385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.728 [2024-12-05 20:03:11.009026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.728 [2024-12-05 20:03:11.009170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.728 [2024-12-05 20:03:11.009224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63069 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63069 ']' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63069 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63069 00:10:09.728 killing process with pid 63069 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63069' 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63069 00:10:09.728 [2024-12-05 20:03:11.102864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.728 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63069 00:10:09.728 [2024-12-05 20:03:11.121202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.110 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.110 00:10:11.110 real 0m5.259s 00:10:11.110 user 0m7.619s 00:10:11.110 sys 0m0.771s 00:10:11.110 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.110 ************************************ 00:10:11.110 END TEST raid_state_function_test_sb 00:10:11.110 ************************************ 00:10:11.110 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.110 20:03:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:11.110 20:03:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.110 20:03:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.110 20:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.110 ************************************ 00:10:11.110 START TEST 
raid_superblock_test 00:10:11.110 ************************************ 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63321 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63321 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63321 ']' 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.110 20:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.110 [2024-12-05 20:03:12.504182] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:11.110 [2024-12-05 20:03:12.504320] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63321 ] 00:10:11.369 [2024-12-05 20:03:12.680414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.629 [2024-12-05 20:03:12.808666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.629 [2024-12-05 20:03:13.028402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.629 [2024-12-05 20:03:13.028463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:12.199 
20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.199 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 malloc1 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 [2024-12-05 20:03:13.438373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.200 [2024-12-05 20:03:13.438491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.200 [2024-12-05 20:03:13.438538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:12.200 [2024-12-05 20:03:13.438571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.200 [2024-12-05 20:03:13.441170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.200 [2024-12-05 20:03:13.441270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.200 pt1 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 malloc2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 [2024-12-05 20:03:13.498700] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.200 [2024-12-05 20:03:13.498824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.200 [2024-12-05 20:03:13.498874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:12.200 [2024-12-05 20:03:13.498923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.200 [2024-12-05 20:03:13.501310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.200 [2024-12-05 20:03:13.501392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.200 
pt2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 [2024-12-05 20:03:13.510724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.200 [2024-12-05 20:03:13.512797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.200 [2024-12-05 20:03:13.513053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:12.200 [2024-12-05 20:03:13.513120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.200 [2024-12-05 20:03:13.513454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:12.200 [2024-12-05 20:03:13.513678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:12.200 [2024-12-05 20:03:13.513738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:12.200 [2024-12-05 20:03:13.513986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.200 "name": "raid_bdev1", 00:10:12.200 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:12.200 "strip_size_kb": 0, 00:10:12.200 "state": "online", 00:10:12.200 "raid_level": "raid1", 00:10:12.200 "superblock": true, 00:10:12.200 "num_base_bdevs": 2, 00:10:12.200 "num_base_bdevs_discovered": 2, 00:10:12.200 "num_base_bdevs_operational": 2, 00:10:12.200 "base_bdevs_list": [ 00:10:12.200 { 00:10:12.200 "name": "pt1", 00:10:12.200 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:12.200 "is_configured": true, 00:10:12.200 "data_offset": 2048, 00:10:12.200 "data_size": 63488 00:10:12.200 }, 00:10:12.200 { 00:10:12.200 "name": "pt2", 00:10:12.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.200 "is_configured": true, 00:10:12.200 "data_offset": 2048, 00:10:12.200 "data_size": 63488 00:10:12.200 } 00:10:12.200 ] 00:10:12.200 }' 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.200 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.769 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.770 [2024-12-05 20:03:13.934335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.770 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.770 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:10:12.770 "name": "raid_bdev1", 00:10:12.770 "aliases": [ 00:10:12.770 "341526f6-aaac-4a1c-a817-3403d9716ecb" 00:10:12.770 ], 00:10:12.770 "product_name": "Raid Volume", 00:10:12.770 "block_size": 512, 00:10:12.770 "num_blocks": 63488, 00:10:12.770 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:12.770 "assigned_rate_limits": { 00:10:12.770 "rw_ios_per_sec": 0, 00:10:12.770 "rw_mbytes_per_sec": 0, 00:10:12.770 "r_mbytes_per_sec": 0, 00:10:12.770 "w_mbytes_per_sec": 0 00:10:12.770 }, 00:10:12.770 "claimed": false, 00:10:12.770 "zoned": false, 00:10:12.770 "supported_io_types": { 00:10:12.770 "read": true, 00:10:12.770 "write": true, 00:10:12.770 "unmap": false, 00:10:12.770 "flush": false, 00:10:12.770 "reset": true, 00:10:12.770 "nvme_admin": false, 00:10:12.770 "nvme_io": false, 00:10:12.770 "nvme_io_md": false, 00:10:12.770 "write_zeroes": true, 00:10:12.770 "zcopy": false, 00:10:12.770 "get_zone_info": false, 00:10:12.770 "zone_management": false, 00:10:12.770 "zone_append": false, 00:10:12.770 "compare": false, 00:10:12.770 "compare_and_write": false, 00:10:12.770 "abort": false, 00:10:12.770 "seek_hole": false, 00:10:12.770 "seek_data": false, 00:10:12.770 "copy": false, 00:10:12.770 "nvme_iov_md": false 00:10:12.770 }, 00:10:12.770 "memory_domains": [ 00:10:12.770 { 00:10:12.770 "dma_device_id": "system", 00:10:12.770 "dma_device_type": 1 00:10:12.770 }, 00:10:12.770 { 00:10:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.770 "dma_device_type": 2 00:10:12.770 }, 00:10:12.770 { 00:10:12.770 "dma_device_id": "system", 00:10:12.770 "dma_device_type": 1 00:10:12.770 }, 00:10:12.770 { 00:10:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.770 "dma_device_type": 2 00:10:12.770 } 00:10:12.770 ], 00:10:12.770 "driver_specific": { 00:10:12.770 "raid": { 00:10:12.770 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:12.770 "strip_size_kb": 0, 00:10:12.770 "state": "online", 00:10:12.770 "raid_level": "raid1", 
00:10:12.770 "superblock": true, 00:10:12.770 "num_base_bdevs": 2, 00:10:12.770 "num_base_bdevs_discovered": 2, 00:10:12.770 "num_base_bdevs_operational": 2, 00:10:12.770 "base_bdevs_list": [ 00:10:12.770 { 00:10:12.770 "name": "pt1", 00:10:12.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.770 "is_configured": true, 00:10:12.770 "data_offset": 2048, 00:10:12.770 "data_size": 63488 00:10:12.770 }, 00:10:12.770 { 00:10:12.770 "name": "pt2", 00:10:12.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.770 "is_configured": true, 00:10:12.770 "data_offset": 2048, 00:10:12.770 "data_size": 63488 00:10:12.770 } 00:10:12.770 ] 00:10:12.770 } 00:10:12.770 } 00:10:12.770 }' 00:10:12.770 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.770 pt2' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:12.770 [2024-12-05 20:03:14.157959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=341526f6-aaac-4a1c-a817-3403d9716ecb 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 341526f6-aaac-4a1c-a817-3403d9716ecb ']' 00:10:12.770 20:03:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 [2024-12-05 20:03:14.193524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.770 [2024-12-05 20:03:14.193605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.770 [2024-12-05 20:03:14.193722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.770 [2024-12-05 20:03:14.193819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.770 [2024-12-05 20:03:14.193881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.770 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:13.030 20:03:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 [2024-12-05 20:03:14.333373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:13.030 [2024-12-05 20:03:14.335532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:13.030 [2024-12-05 20:03:14.335656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:13.030 [2024-12-05 20:03:14.335765] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:13.030 [2024-12-05 20:03:14.335844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.030 [2024-12-05 20:03:14.335899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:13.030 request: 00:10:13.030 { 00:10:13.030 "name": "raid_bdev1", 00:10:13.030 "raid_level": "raid1", 00:10:13.030 "base_bdevs": [ 00:10:13.030 "malloc1", 00:10:13.030 "malloc2" 00:10:13.030 ], 00:10:13.030 "superblock": false, 00:10:13.030 "method": "bdev_raid_create", 00:10:13.030 "req_id": 1 00:10:13.030 } 00:10:13.030 Got 
JSON-RPC error response 00:10:13.030 response: 00:10:13.030 { 00:10:13.030 "code": -17, 00:10:13.030 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:13.030 } 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.030 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.031 [2024-12-05 20:03:14.393211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.031 [2024-12-05 20:03:14.393320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:13.031 [2024-12-05 20:03:14.393362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:13.031 [2024-12-05 20:03:14.393414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.031 [2024-12-05 20:03:14.395848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.031 [2024-12-05 20:03:14.395940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.031 [2024-12-05 20:03:14.396066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.031 [2024-12-05 20:03:14.396206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.031 pt1 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.031 
20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.031 "name": "raid_bdev1", 00:10:13.031 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:13.031 "strip_size_kb": 0, 00:10:13.031 "state": "configuring", 00:10:13.031 "raid_level": "raid1", 00:10:13.031 "superblock": true, 00:10:13.031 "num_base_bdevs": 2, 00:10:13.031 "num_base_bdevs_discovered": 1, 00:10:13.031 "num_base_bdevs_operational": 2, 00:10:13.031 "base_bdevs_list": [ 00:10:13.031 { 00:10:13.031 "name": "pt1", 00:10:13.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.031 "is_configured": true, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 }, 00:10:13.031 { 00:10:13.031 "name": null, 00:10:13.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.031 "is_configured": false, 00:10:13.031 "data_offset": 2048, 00:10:13.031 "data_size": 63488 00:10:13.031 } 00:10:13.031 ] 00:10:13.031 }' 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.031 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 [2024-12-05 20:03:14.812580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.600 [2024-12-05 20:03:14.812715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.600 [2024-12-05 20:03:14.812764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:13.600 [2024-12-05 20:03:14.812827] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.600 [2024-12-05 20:03:14.813428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.600 [2024-12-05 20:03:14.813510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.600 [2024-12-05 20:03:14.813651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.600 [2024-12-05 20:03:14.813720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.600 [2024-12-05 20:03:14.813936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.600 [2024-12-05 20:03:14.813989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.600 [2024-12-05 20:03:14.814302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.600 [2024-12-05 20:03:14.814528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.600 [2024-12-05 20:03:14.814578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:10:13.600 [2024-12-05 20:03:14.814791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.600 pt2 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.600 "name": "raid_bdev1", 00:10:13.600 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:13.600 "strip_size_kb": 0, 00:10:13.600 "state": "online", 00:10:13.600 "raid_level": "raid1", 00:10:13.600 "superblock": true, 00:10:13.600 "num_base_bdevs": 2, 00:10:13.600 "num_base_bdevs_discovered": 2, 00:10:13.600 "num_base_bdevs_operational": 2, 00:10:13.600 "base_bdevs_list": [ 00:10:13.600 { 00:10:13.600 "name": "pt1", 00:10:13.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 }, 00:10:13.600 { 00:10:13.600 "name": "pt2", 00:10:13.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.600 "is_configured": true, 00:10:13.600 "data_offset": 2048, 00:10:13.600 "data_size": 63488 00:10:13.600 } 00:10:13.600 ] 00:10:13.600 }' 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.600 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.858 [2024-12-05 20:03:15.268158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.858 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.116 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.116 "name": "raid_bdev1", 00:10:14.116 "aliases": [ 00:10:14.116 "341526f6-aaac-4a1c-a817-3403d9716ecb" 00:10:14.116 ], 00:10:14.116 "product_name": "Raid Volume", 00:10:14.116 "block_size": 512, 00:10:14.116 "num_blocks": 63488, 00:10:14.116 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:14.116 "assigned_rate_limits": { 00:10:14.116 "rw_ios_per_sec": 0, 00:10:14.116 "rw_mbytes_per_sec": 0, 00:10:14.116 "r_mbytes_per_sec": 0, 00:10:14.116 "w_mbytes_per_sec": 0 00:10:14.116 }, 00:10:14.116 "claimed": false, 00:10:14.116 "zoned": false, 00:10:14.116 "supported_io_types": { 00:10:14.116 "read": true, 00:10:14.116 "write": true, 00:10:14.116 "unmap": false, 00:10:14.116 "flush": false, 00:10:14.116 "reset": true, 00:10:14.116 "nvme_admin": false, 00:10:14.116 "nvme_io": false, 00:10:14.116 "nvme_io_md": false, 00:10:14.116 "write_zeroes": true, 00:10:14.116 "zcopy": false, 00:10:14.116 "get_zone_info": false, 00:10:14.116 "zone_management": false, 00:10:14.116 "zone_append": false, 00:10:14.116 "compare": false, 00:10:14.116 "compare_and_write": false, 00:10:14.116 "abort": false, 00:10:14.116 "seek_hole": false, 00:10:14.116 "seek_data": false, 00:10:14.116 "copy": false, 00:10:14.116 "nvme_iov_md": false 00:10:14.116 }, 00:10:14.116 "memory_domains": [ 00:10:14.116 { 00:10:14.116 "dma_device_id": 
"system", 00:10:14.116 "dma_device_type": 1 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.116 "dma_device_type": 2 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "dma_device_id": "system", 00:10:14.116 "dma_device_type": 1 00:10:14.116 }, 00:10:14.116 { 00:10:14.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.116 "dma_device_type": 2 00:10:14.116 } 00:10:14.116 ], 00:10:14.116 "driver_specific": { 00:10:14.116 "raid": { 00:10:14.116 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:14.116 "strip_size_kb": 0, 00:10:14.116 "state": "online", 00:10:14.116 "raid_level": "raid1", 00:10:14.117 "superblock": true, 00:10:14.117 "num_base_bdevs": 2, 00:10:14.117 "num_base_bdevs_discovered": 2, 00:10:14.117 "num_base_bdevs_operational": 2, 00:10:14.117 "base_bdevs_list": [ 00:10:14.117 { 00:10:14.117 "name": "pt1", 00:10:14.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.117 "is_configured": true, 00:10:14.117 "data_offset": 2048, 00:10:14.117 "data_size": 63488 00:10:14.117 }, 00:10:14.117 { 00:10:14.117 "name": "pt2", 00:10:14.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.117 "is_configured": true, 00:10:14.117 "data_offset": 2048, 00:10:14.117 "data_size": 63488 00:10:14.117 } 00:10:14.117 ] 00:10:14.117 } 00:10:14.117 } 00:10:14.117 }' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:14.117 pt2' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:14.117 [2024-12-05 20:03:15.519722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.117 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 341526f6-aaac-4a1c-a817-3403d9716ecb '!=' 341526f6-aaac-4a1c-a817-3403d9716ecb ']' 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.375 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.375 [2024-12-05 20:03:15.567384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.376 "name": "raid_bdev1", 00:10:14.376 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:14.376 "strip_size_kb": 0, 00:10:14.376 "state": "online", 00:10:14.376 "raid_level": "raid1", 00:10:14.376 "superblock": true, 00:10:14.376 "num_base_bdevs": 2, 00:10:14.376 "num_base_bdevs_discovered": 1, 00:10:14.376 "num_base_bdevs_operational": 1, 00:10:14.376 "base_bdevs_list": [ 00:10:14.376 { 00:10:14.376 "name": null, 00:10:14.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.376 "is_configured": false, 00:10:14.376 "data_offset": 0, 00:10:14.376 "data_size": 63488 00:10:14.376 }, 00:10:14.376 { 00:10:14.376 "name": "pt2", 00:10:14.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.376 "is_configured": true, 00:10:14.376 "data_offset": 2048, 00:10:14.376 "data_size": 63488 00:10:14.376 } 00:10:14.376 ] 00:10:14.376 }' 00:10:14.376 20:03:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.376 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 [2024-12-05 20:03:16.010611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.635 [2024-12-05 20:03:16.010694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.635 [2024-12-05 20:03:16.010803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.635 [2024-12-05 20:03:16.010895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.635 [2024-12-05 20:03:16.010974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:14.635 
20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.635 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 [2024-12-05 20:03:16.070496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.894 [2024-12-05 20:03:16.070610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.894 [2024-12-05 20:03:16.070634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:14.894 [2024-12-05 20:03:16.070646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.894 [2024-12-05 
20:03:16.073182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.894 [2024-12-05 20:03:16.073230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.894 [2024-12-05 20:03:16.073325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.894 [2024-12-05 20:03:16.073388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.894 [2024-12-05 20:03:16.073506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:14.894 [2024-12-05 20:03:16.073520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.894 [2024-12-05 20:03:16.073780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:14.894 [2024-12-05 20:03:16.073973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:14.894 [2024-12-05 20:03:16.073985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:14.894 [2024-12-05 20:03:16.074155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.894 pt2 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.894 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.895 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.895 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.895 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.895 "name": "raid_bdev1", 00:10:14.895 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:14.895 "strip_size_kb": 0, 00:10:14.895 "state": "online", 00:10:14.895 "raid_level": "raid1", 00:10:14.895 "superblock": true, 00:10:14.895 "num_base_bdevs": 2, 00:10:14.895 "num_base_bdevs_discovered": 1, 00:10:14.895 "num_base_bdevs_operational": 1, 00:10:14.895 "base_bdevs_list": [ 00:10:14.895 { 00:10:14.895 "name": null, 00:10:14.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.895 "is_configured": false, 00:10:14.895 "data_offset": 2048, 00:10:14.895 "data_size": 63488 00:10:14.895 }, 00:10:14.895 { 00:10:14.895 "name": "pt2", 00:10:14.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.895 "is_configured": true, 00:10:14.895 "data_offset": 2048, 00:10:14.895 "data_size": 63488 00:10:14.895 } 00:10:14.895 ] 00:10:14.895 }' 
00:10:14.895 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.895 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.154 [2024-12-05 20:03:16.517720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.154 [2024-12-05 20:03:16.517809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.154 [2024-12-05 20:03:16.517934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.154 [2024-12-05 20:03:16.518016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.154 [2024-12-05 20:03:16.518066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.154 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.154 [2024-12-05 20:03:16.581655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:15.154 [2024-12-05 20:03:16.581795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.154 [2024-12-05 20:03:16.581835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:15.154 [2024-12-05 20:03:16.581863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.154 [2024-12-05 20:03:16.584423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.154 [2024-12-05 20:03:16.584515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.154 [2024-12-05 20:03:16.584669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:15.154 [2024-12-05 20:03:16.584754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:15.154 [2024-12-05 20:03:16.584996] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:15.154 [2024-12-05 20:03:16.585064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.154 [2024-12-05 20:03:16.585110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:15.154 [2024-12-05 20:03:16.585211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:10:15.154 [2024-12-05 20:03:16.585339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:15.154 [2024-12-05 20:03:16.585383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.154 [2024-12-05 20:03:16.585716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:15.154 [2024-12-05 20:03:16.585970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:15.154 [2024-12-05 20:03:16.586026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:15.154 [2024-12-05 20:03:16.586304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.154 pt1 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.414 "name": "raid_bdev1", 00:10:15.414 "uuid": "341526f6-aaac-4a1c-a817-3403d9716ecb", 00:10:15.414 "strip_size_kb": 0, 00:10:15.414 "state": "online", 00:10:15.414 "raid_level": "raid1", 00:10:15.414 "superblock": true, 00:10:15.414 "num_base_bdevs": 2, 00:10:15.414 "num_base_bdevs_discovered": 1, 00:10:15.414 "num_base_bdevs_operational": 1, 00:10:15.414 "base_bdevs_list": [ 00:10:15.414 { 00:10:15.414 "name": null, 00:10:15.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.414 "is_configured": false, 00:10:15.414 "data_offset": 2048, 00:10:15.414 "data_size": 63488 00:10:15.414 }, 00:10:15.414 { 00:10:15.414 "name": "pt2", 00:10:15.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.414 "is_configured": true, 00:10:15.414 "data_offset": 2048, 00:10:15.414 "data_size": 63488 00:10:15.414 } 00:10:15.414 ] 00:10:15.414 }' 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.414 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.675 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:15.675 20:03:16 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.675 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.675 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:15.675 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.675 [2024-12-05 20:03:17.037666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 341526f6-aaac-4a1c-a817-3403d9716ecb '!=' 341526f6-aaac-4a1c-a817-3403d9716ecb ']' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63321 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63321 ']' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63321 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63321 00:10:15.675 killing process with pid 
63321 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63321' 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63321 00:10:15.675 [2024-12-05 20:03:17.107524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.675 [2024-12-05 20:03:17.107623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.675 [2024-12-05 20:03:17.107678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.675 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63321 00:10:15.675 [2024-12-05 20:03:17.107694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:15.936 [2024-12-05 20:03:17.319228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.320 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:17.320 00:10:17.320 real 0m6.053s 00:10:17.320 user 0m9.147s 00:10:17.320 sys 0m0.972s 00:10:17.320 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.320 ************************************ 00:10:17.320 END TEST raid_superblock_test 00:10:17.320 ************************************ 00:10:17.320 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.320 20:03:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:17.320 20:03:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:17.320 20:03:18 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.320 20:03:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.320 ************************************ 00:10:17.320 START TEST raid_read_error_test 00:10:17.320 ************************************ 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:17.320 20:03:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IYJytc9Y80 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63646 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63646 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63646 ']' 00:10:17.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.320 20:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.320 [2024-12-05 20:03:18.643027] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:17.320 [2024-12-05 20:03:18.643148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63646 ] 00:10:17.580 [2024-12-05 20:03:18.816808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.580 [2024-12-05 20:03:18.933191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.840 [2024-12-05 20:03:19.146535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.840 [2024-12-05 20:03:19.146601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.100 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.100 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.100 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.101 BaseBdev1_malloc 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.101 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.362 true 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.362 [2024-12-05 20:03:19.541850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:18.362 [2024-12-05 20:03:19.541918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.362 [2024-12-05 20:03:19.541953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:18.362 [2024-12-05 20:03:19.541964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.362 [2024-12-05 20:03:19.544183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.362 [2024-12-05 20:03:19.544228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:18.362 BaseBdev1 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.362 BaseBdev2_malloc 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.362 true 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.362 [2024-12-05 20:03:19.598726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:18.362 [2024-12-05 20:03:19.598782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.362 [2024-12-05 20:03:19.598799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:18.362 [2024-12-05 20:03:19.598810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.362 [2024-12-05 20:03:19.601169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.362 [2024-12-05 20:03:19.601210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:18.362 BaseBdev2 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:18.362 20:03:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.362 [2024-12-05 20:03:19.606769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.362 [2024-12-05 20:03:19.608821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.362 [2024-12-05 20:03:19.609149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.362 [2024-12-05 20:03:19.609175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.362 [2024-12-05 20:03:19.609454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:18.362 [2024-12-05 20:03:19.609662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.362 [2024-12-05 20:03:19.609675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:18.362 [2024-12-05 20:03:19.609831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.362 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.363 "name": "raid_bdev1", 00:10:18.363 "uuid": "848b6713-d528-441f-aa57-d13910e0ee38", 00:10:18.363 "strip_size_kb": 0, 00:10:18.363 "state": "online", 00:10:18.363 "raid_level": "raid1", 00:10:18.363 "superblock": true, 00:10:18.363 "num_base_bdevs": 2, 00:10:18.363 "num_base_bdevs_discovered": 2, 00:10:18.363 "num_base_bdevs_operational": 2, 00:10:18.363 "base_bdevs_list": [ 00:10:18.363 { 00:10:18.363 "name": "BaseBdev1", 00:10:18.363 "uuid": "db1deba1-c4e0-5e3b-8bea-e051a857989a", 00:10:18.363 "is_configured": true, 00:10:18.363 "data_offset": 2048, 00:10:18.363 "data_size": 63488 00:10:18.363 }, 00:10:18.363 { 00:10:18.363 "name": "BaseBdev2", 00:10:18.363 "uuid": "814cc746-3a12-55c6-bea9-03ba2e2311f1", 00:10:18.363 "is_configured": true, 00:10:18.363 "data_offset": 2048, 00:10:18.363 "data_size": 63488 00:10:18.363 } 00:10:18.363 ] 00:10:18.363 }' 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.363 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.931 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:18.931 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:18.931 [2024-12-05 20:03:20.187177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.869 20:03:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.869 "name": "raid_bdev1", 00:10:19.869 "uuid": "848b6713-d528-441f-aa57-d13910e0ee38", 00:10:19.869 "strip_size_kb": 0, 00:10:19.869 "state": "online", 00:10:19.869 "raid_level": "raid1", 00:10:19.869 "superblock": true, 00:10:19.869 "num_base_bdevs": 2, 00:10:19.869 "num_base_bdevs_discovered": 2, 00:10:19.869 "num_base_bdevs_operational": 2, 00:10:19.869 "base_bdevs_list": [ 00:10:19.869 { 00:10:19.869 "name": "BaseBdev1", 00:10:19.869 "uuid": "db1deba1-c4e0-5e3b-8bea-e051a857989a", 00:10:19.869 "is_configured": true, 00:10:19.869 "data_offset": 2048, 00:10:19.869 "data_size": 63488 00:10:19.869 }, 00:10:19.869 { 00:10:19.869 "name": "BaseBdev2", 00:10:19.869 "uuid": "814cc746-3a12-55c6-bea9-03ba2e2311f1", 00:10:19.869 "is_configured": true, 00:10:19.869 "data_offset": 2048, 00:10:19.869 "data_size": 63488 
00:10:19.869 } 00:10:19.869 ] 00:10:19.869 }' 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.869 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.128 [2024-12-05 20:03:21.558018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.128 [2024-12-05 20:03:21.558128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.128 [2024-12-05 20:03:21.561579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.128 [2024-12-05 20:03:21.561718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.128 [2024-12-05 20:03:21.561871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.128 [2024-12-05 20:03:21.561989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.128 { 00:10:20.128 "results": [ 00:10:20.128 { 00:10:20.128 "job": "raid_bdev1", 00:10:20.128 "core_mask": "0x1", 00:10:20.128 "workload": "randrw", 00:10:20.128 "percentage": 50, 00:10:20.128 "status": "finished", 00:10:20.128 "queue_depth": 1, 00:10:20.128 "io_size": 131072, 00:10:20.128 "runtime": 1.371823, 00:10:20.128 "iops": 16330.095063284403, 00:10:20.128 "mibps": 2041.2618829105504, 00:10:20.128 "io_failed": 0, 00:10:20.128 "io_timeout": 0, 00:10:20.128 "avg_latency_us": 58.26938751959529, 00:10:20.128 "min_latency_us": 
24.593886462882097, 00:10:20.128 "max_latency_us": 1645.5545851528384 00:10:20.128 } 00:10:20.128 ], 00:10:20.128 "core_count": 1 00:10:20.128 } 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63646 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63646 ']' 00:10:20.128 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63646 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63646 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63646' 00:10:20.386 killing process with pid 63646 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63646 00:10:20.386 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63646 00:10:20.386 [2024-12-05 20:03:21.602917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.386 [2024-12-05 20:03:21.739802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IYJytc9Y80 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:21.765 00:10:21.765 real 0m4.436s 00:10:21.765 user 0m5.348s 00:10:21.765 sys 0m0.540s 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.765 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.765 ************************************ 00:10:21.765 END TEST raid_read_error_test 00:10:21.765 ************************************ 00:10:21.765 20:03:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:21.765 20:03:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.765 20:03:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.765 20:03:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.765 ************************************ 00:10:21.765 START TEST raid_write_error_test 00:10:21.765 ************************************ 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.765 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Iwh6mdmINt 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63796 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63796 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63796 ']' 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.766 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.766 [2024-12-05 20:03:23.143697] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:21.766 [2024-12-05 20:03:23.143912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63796 ] 00:10:22.025 [2024-12-05 20:03:23.320644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.025 [2024-12-05 20:03:23.431597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.285 [2024-12-05 20:03:23.632727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.285 [2024-12-05 20:03:23.632867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.544 20:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.804 BaseBdev1_malloc 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.804 true 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.804 [2024-12-05 20:03:24.038790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.804 [2024-12-05 20:03:24.038856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.804 [2024-12-05 20:03:24.038881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.804 [2024-12-05 20:03:24.038908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.804 [2024-12-05 20:03:24.041283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.804 [2024-12-05 20:03:24.041389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.804 BaseBdev1 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.804 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 BaseBdev2_malloc 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.805 20:03:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 true 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 [2024-12-05 20:03:24.107194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.805 [2024-12-05 20:03:24.107248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.805 [2024-12-05 20:03:24.107280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.805 [2024-12-05 20:03:24.107291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.805 [2024-12-05 20:03:24.109491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.805 [2024-12-05 20:03:24.109533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.805 BaseBdev2 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 [2024-12-05 20:03:24.119227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:22.805 [2024-12-05 20:03:24.121015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.805 [2024-12-05 20:03:24.121216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.805 [2024-12-05 20:03:24.121232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.805 [2024-12-05 20:03:24.121470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:22.805 [2024-12-05 20:03:24.121662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.805 [2024-12-05 20:03:24.121672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:22.805 [2024-12-05 20:03:24.121823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.805 "name": "raid_bdev1", 00:10:22.805 "uuid": "136d403a-a1e6-4dc8-8985-89d576e5a597", 00:10:22.805 "strip_size_kb": 0, 00:10:22.805 "state": "online", 00:10:22.805 "raid_level": "raid1", 00:10:22.805 "superblock": true, 00:10:22.805 "num_base_bdevs": 2, 00:10:22.805 "num_base_bdevs_discovered": 2, 00:10:22.805 "num_base_bdevs_operational": 2, 00:10:22.805 "base_bdevs_list": [ 00:10:22.805 { 00:10:22.805 "name": "BaseBdev1", 00:10:22.805 "uuid": "9b5b1f50-9047-5f75-890f-4378b8a0e5c7", 00:10:22.805 "is_configured": true, 00:10:22.805 "data_offset": 2048, 00:10:22.805 "data_size": 63488 00:10:22.805 }, 00:10:22.805 { 00:10:22.805 "name": "BaseBdev2", 00:10:22.805 "uuid": "e7e890cc-a737-56b1-8ff6-d5c15a8a3c04", 00:10:22.805 "is_configured": true, 00:10:22.805 "data_offset": 2048, 00:10:22.805 "data_size": 63488 00:10:22.805 } 00:10:22.805 ] 00:10:22.805 }' 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.805 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.374 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.374 20:03:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.374 [2024-12-05 20:03:24.699694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.313 [2024-12-05 20:03:25.612483] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:24.313 [2024-12-05 20:03:25.612550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.313 [2024-12-05 20:03:25.612755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.313 "name": "raid_bdev1", 00:10:24.313 "uuid": "136d403a-a1e6-4dc8-8985-89d576e5a597", 00:10:24.313 "strip_size_kb": 0, 00:10:24.313 "state": "online", 00:10:24.313 "raid_level": "raid1", 00:10:24.313 "superblock": true, 00:10:24.313 "num_base_bdevs": 2, 00:10:24.313 "num_base_bdevs_discovered": 1, 00:10:24.313 "num_base_bdevs_operational": 1, 00:10:24.313 "base_bdevs_list": [ 00:10:24.313 { 00:10:24.313 "name": null, 00:10:24.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.313 "is_configured": false, 00:10:24.313 "data_offset": 0, 00:10:24.313 "data_size": 63488 00:10:24.313 }, 00:10:24.313 { 00:10:24.313 "name": 
"BaseBdev2", 00:10:24.313 "uuid": "e7e890cc-a737-56b1-8ff6-d5c15a8a3c04", 00:10:24.313 "is_configured": true, 00:10:24.313 "data_offset": 2048, 00:10:24.313 "data_size": 63488 00:10:24.313 } 00:10:24.313 ] 00:10:24.313 }' 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.313 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.881 [2024-12-05 20:03:26.131252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.881 [2024-12-05 20:03:26.131368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.881 [2024-12-05 20:03:26.134454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.881 [2024-12-05 20:03:26.134550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.881 [2024-12-05 20:03:26.134635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.881 [2024-12-05 20:03:26.134695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:24.881 { 00:10:24.881 "results": [ 00:10:24.881 { 00:10:24.881 "job": "raid_bdev1", 00:10:24.881 "core_mask": "0x1", 00:10:24.881 "workload": "randrw", 00:10:24.881 "percentage": 50, 00:10:24.881 "status": "finished", 00:10:24.881 "queue_depth": 1, 00:10:24.881 "io_size": 131072, 00:10:24.881 "runtime": 1.432453, 00:10:24.881 "iops": 20120.72996461315, 00:10:24.881 "mibps": 2515.091245576644, 00:10:24.881 "io_failed": 0, 00:10:24.881 "io_timeout": 0, 
00:10:24.881 "avg_latency_us": 46.92286356946522, 00:10:24.881 "min_latency_us": 23.36419213973799, 00:10:24.881 "max_latency_us": 1738.564192139738 00:10:24.881 } 00:10:24.881 ], 00:10:24.881 "core_count": 1 00:10:24.881 } 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63796 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63796 ']' 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63796 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63796 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.881 killing process with pid 63796 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63796' 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63796 00:10:24.881 [2024-12-05 20:03:26.182852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.881 20:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63796 00:10:25.140 [2024-12-05 20:03:26.319100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Iwh6mdmINt 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:26.517 ************************************ 00:10:26.517 END TEST raid_write_error_test 00:10:26.517 ************************************ 00:10:26.517 00:10:26.517 real 0m4.503s 00:10:26.517 user 0m5.466s 00:10:26.517 sys 0m0.542s 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.517 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.517 20:03:27 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:26.517 20:03:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:26.517 20:03:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:26.517 20:03:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:26.517 20:03:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.517 20:03:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.517 ************************************ 00:10:26.517 START TEST raid_state_function_test 00:10:26.517 ************************************ 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:26.517 
20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63935 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63935' 00:10:26.517 Process raid pid: 63935 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63935 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63935 ']' 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.517 20:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.517 [2024-12-05 20:03:27.707997] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:26.517 [2024-12-05 20:03:27.708133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.517 [2024-12-05 20:03:27.883125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.777 [2024-12-05 20:03:27.996813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.777 [2024-12-05 20:03:28.205048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.777 [2024-12-05 20:03:28.205081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.404 [2024-12-05 20:03:28.561530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.404 [2024-12-05 20:03:28.561587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.404 [2024-12-05 20:03:28.561598] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.404 [2024-12-05 20:03:28.561623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.404 [2024-12-05 20:03:28.561630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.404 [2024-12-05 20:03:28.561639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.404 "name": "Existed_Raid", 00:10:27.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.404 "strip_size_kb": 64, 00:10:27.404 "state": "configuring", 00:10:27.404 "raid_level": "raid0", 00:10:27.404 "superblock": false, 00:10:27.404 "num_base_bdevs": 3, 00:10:27.404 "num_base_bdevs_discovered": 0, 00:10:27.404 "num_base_bdevs_operational": 3, 00:10:27.404 "base_bdevs_list": [ 00:10:27.404 { 00:10:27.404 "name": "BaseBdev1", 00:10:27.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.404 "is_configured": false, 00:10:27.404 "data_offset": 0, 00:10:27.404 "data_size": 0 00:10:27.404 }, 00:10:27.404 { 00:10:27.404 "name": "BaseBdev2", 00:10:27.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.404 "is_configured": false, 00:10:27.404 "data_offset": 0, 00:10:27.404 "data_size": 0 00:10:27.404 }, 00:10:27.404 { 00:10:27.404 "name": "BaseBdev3", 00:10:27.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.404 "is_configured": false, 00:10:27.404 "data_offset": 0, 00:10:27.404 "data_size": 0 00:10:27.404 } 00:10:27.404 ] 00:10:27.404 }' 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.404 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.663 20:03:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.663 [2024-12-05 20:03:28.980767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.663 [2024-12-05 20:03:28.980865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.663 [2024-12-05 20:03:28.992758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.663 [2024-12-05 20:03:28.992857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.663 [2024-12-05 20:03:28.992905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.663 [2024-12-05 20:03:28.992936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.663 [2024-12-05 20:03:28.992958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.663 [2024-12-05 20:03:28.992984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:27.663 20:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.663 [2024-12-05 20:03:29.040785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.663 BaseBdev1 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.663 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.664 [ 00:10:27.664 { 00:10:27.664 "name": "BaseBdev1", 00:10:27.664 "aliases": [ 00:10:27.664 "f920d50a-549d-4828-9f35-d43a6be6c5ae" 00:10:27.664 ], 00:10:27.664 
"product_name": "Malloc disk", 00:10:27.664 "block_size": 512, 00:10:27.664 "num_blocks": 65536, 00:10:27.664 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:27.664 "assigned_rate_limits": { 00:10:27.664 "rw_ios_per_sec": 0, 00:10:27.664 "rw_mbytes_per_sec": 0, 00:10:27.664 "r_mbytes_per_sec": 0, 00:10:27.664 "w_mbytes_per_sec": 0 00:10:27.664 }, 00:10:27.664 "claimed": true, 00:10:27.664 "claim_type": "exclusive_write", 00:10:27.664 "zoned": false, 00:10:27.664 "supported_io_types": { 00:10:27.664 "read": true, 00:10:27.664 "write": true, 00:10:27.664 "unmap": true, 00:10:27.664 "flush": true, 00:10:27.664 "reset": true, 00:10:27.664 "nvme_admin": false, 00:10:27.664 "nvme_io": false, 00:10:27.664 "nvme_io_md": false, 00:10:27.664 "write_zeroes": true, 00:10:27.664 "zcopy": true, 00:10:27.664 "get_zone_info": false, 00:10:27.664 "zone_management": false, 00:10:27.664 "zone_append": false, 00:10:27.664 "compare": false, 00:10:27.664 "compare_and_write": false, 00:10:27.664 "abort": true, 00:10:27.664 "seek_hole": false, 00:10:27.664 "seek_data": false, 00:10:27.664 "copy": true, 00:10:27.664 "nvme_iov_md": false 00:10:27.664 }, 00:10:27.664 "memory_domains": [ 00:10:27.664 { 00:10:27.664 "dma_device_id": "system", 00:10:27.664 "dma_device_type": 1 00:10:27.664 }, 00:10:27.664 { 00:10:27.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.664 "dma_device_type": 2 00:10:27.664 } 00:10:27.664 ], 00:10:27.664 "driver_specific": {} 00:10:27.664 } 00:10:27.664 ] 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.664 20:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.664 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.922 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.922 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.922 "name": "Existed_Raid", 00:10:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.922 "strip_size_kb": 64, 00:10:27.922 "state": "configuring", 00:10:27.922 "raid_level": "raid0", 00:10:27.922 "superblock": false, 00:10:27.922 "num_base_bdevs": 3, 00:10:27.922 "num_base_bdevs_discovered": 1, 00:10:27.922 "num_base_bdevs_operational": 3, 00:10:27.922 "base_bdevs_list": [ 00:10:27.922 { 00:10:27.922 "name": "BaseBdev1", 
00:10:27.922 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:27.922 "is_configured": true, 00:10:27.922 "data_offset": 0, 00:10:27.922 "data_size": 65536 00:10:27.922 }, 00:10:27.922 { 00:10:27.922 "name": "BaseBdev2", 00:10:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.922 "is_configured": false, 00:10:27.922 "data_offset": 0, 00:10:27.922 "data_size": 0 00:10:27.922 }, 00:10:27.922 { 00:10:27.922 "name": "BaseBdev3", 00:10:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.922 "is_configured": false, 00:10:27.922 "data_offset": 0, 00:10:27.922 "data_size": 0 00:10:27.922 } 00:10:27.922 ] 00:10:27.922 }' 00:10:27.922 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.922 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.181 [2024-12-05 20:03:29.524031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.181 [2024-12-05 20:03:29.524144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.181 [2024-12-05 
20:03:29.536081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.181 [2024-12-05 20:03:29.538087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.181 [2024-12-05 20:03:29.538169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.181 [2024-12-05 20:03:29.538203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.181 [2024-12-05 20:03:29.538227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.181 "name": "Existed_Raid", 00:10:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.181 "strip_size_kb": 64, 00:10:28.181 "state": "configuring", 00:10:28.181 "raid_level": "raid0", 00:10:28.181 "superblock": false, 00:10:28.181 "num_base_bdevs": 3, 00:10:28.181 "num_base_bdevs_discovered": 1, 00:10:28.181 "num_base_bdevs_operational": 3, 00:10:28.181 "base_bdevs_list": [ 00:10:28.181 { 00:10:28.181 "name": "BaseBdev1", 00:10:28.181 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:28.181 "is_configured": true, 00:10:28.181 "data_offset": 0, 00:10:28.181 "data_size": 65536 00:10:28.181 }, 00:10:28.181 { 00:10:28.181 "name": "BaseBdev2", 00:10:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.181 "is_configured": false, 00:10:28.181 "data_offset": 0, 00:10:28.181 "data_size": 0 00:10:28.181 }, 00:10:28.181 { 00:10:28.181 "name": "BaseBdev3", 00:10:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.181 "is_configured": false, 00:10:28.181 "data_offset": 0, 00:10:28.181 "data_size": 0 00:10:28.181 } 00:10:28.181 ] 00:10:28.181 }' 00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:28.181 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.750 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.750 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.750 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.750 [2024-12-05 20:03:30.031165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.750 BaseBdev2 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.750 20:03:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.750 [ 00:10:28.750 { 00:10:28.750 "name": "BaseBdev2", 00:10:28.750 "aliases": [ 00:10:28.750 "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4" 00:10:28.750 ], 00:10:28.750 "product_name": "Malloc disk", 00:10:28.750 "block_size": 512, 00:10:28.750 "num_blocks": 65536, 00:10:28.750 "uuid": "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4", 00:10:28.750 "assigned_rate_limits": { 00:10:28.750 "rw_ios_per_sec": 0, 00:10:28.750 "rw_mbytes_per_sec": 0, 00:10:28.750 "r_mbytes_per_sec": 0, 00:10:28.750 "w_mbytes_per_sec": 0 00:10:28.750 }, 00:10:28.750 "claimed": true, 00:10:28.750 "claim_type": "exclusive_write", 00:10:28.750 "zoned": false, 00:10:28.750 "supported_io_types": { 00:10:28.750 "read": true, 00:10:28.750 "write": true, 00:10:28.750 "unmap": true, 00:10:28.750 "flush": true, 00:10:28.750 "reset": true, 00:10:28.750 "nvme_admin": false, 00:10:28.750 "nvme_io": false, 00:10:28.750 "nvme_io_md": false, 00:10:28.750 "write_zeroes": true, 00:10:28.750 "zcopy": true, 00:10:28.750 "get_zone_info": false, 00:10:28.750 "zone_management": false, 00:10:28.750 "zone_append": false, 00:10:28.750 "compare": false, 00:10:28.750 "compare_and_write": false, 00:10:28.750 "abort": true, 00:10:28.750 "seek_hole": false, 00:10:28.750 "seek_data": false, 00:10:28.750 "copy": true, 00:10:28.750 "nvme_iov_md": false 00:10:28.750 }, 00:10:28.750 "memory_domains": [ 00:10:28.750 { 00:10:28.750 "dma_device_id": "system", 00:10:28.750 "dma_device_type": 1 00:10:28.750 }, 00:10:28.750 { 00:10:28.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.750 "dma_device_type": 2 00:10:28.750 } 00:10:28.750 ], 00:10:28.750 "driver_specific": {} 00:10:28.750 } 00:10:28.750 ] 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.750 20:03:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.750 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.751 "name": "Existed_Raid", 00:10:28.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.751 "strip_size_kb": 64, 00:10:28.751 "state": "configuring", 00:10:28.751 "raid_level": "raid0", 00:10:28.751 "superblock": false, 00:10:28.751 "num_base_bdevs": 3, 00:10:28.751 "num_base_bdevs_discovered": 2, 00:10:28.751 "num_base_bdevs_operational": 3, 00:10:28.751 "base_bdevs_list": [ 00:10:28.751 { 00:10:28.751 "name": "BaseBdev1", 00:10:28.751 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:28.751 "is_configured": true, 00:10:28.751 "data_offset": 0, 00:10:28.751 "data_size": 65536 00:10:28.751 }, 00:10:28.751 { 00:10:28.751 "name": "BaseBdev2", 00:10:28.751 "uuid": "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4", 00:10:28.751 "is_configured": true, 00:10:28.751 "data_offset": 0, 00:10:28.751 "data_size": 65536 00:10:28.751 }, 00:10:28.751 { 00:10:28.751 "name": "BaseBdev3", 00:10:28.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.751 "is_configured": false, 00:10:28.751 "data_offset": 0, 00:10:28.751 "data_size": 0 00:10:28.751 } 00:10:28.751 ] 00:10:28.751 }' 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.751 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.317 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.317 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 [2024-12-05 20:03:30.581218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.318 [2024-12-05 20:03:30.581260] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.318 [2024-12-05 20:03:30.581274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:29.318 [2024-12-05 20:03:30.581547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:29.318 [2024-12-05 20:03:30.581716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.318 [2024-12-05 20:03:30.581726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:29.318 [2024-12-05 20:03:30.581991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.318 BaseBdev3 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.318 
20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 [ 00:10:29.318 { 00:10:29.318 "name": "BaseBdev3", 00:10:29.318 "aliases": [ 00:10:29.318 "bcf5f9da-8fb8-40ca-9301-72570cd8f330" 00:10:29.318 ], 00:10:29.318 "product_name": "Malloc disk", 00:10:29.318 "block_size": 512, 00:10:29.318 "num_blocks": 65536, 00:10:29.318 "uuid": "bcf5f9da-8fb8-40ca-9301-72570cd8f330", 00:10:29.318 "assigned_rate_limits": { 00:10:29.318 "rw_ios_per_sec": 0, 00:10:29.318 "rw_mbytes_per_sec": 0, 00:10:29.318 "r_mbytes_per_sec": 0, 00:10:29.318 "w_mbytes_per_sec": 0 00:10:29.318 }, 00:10:29.318 "claimed": true, 00:10:29.318 "claim_type": "exclusive_write", 00:10:29.318 "zoned": false, 00:10:29.318 "supported_io_types": { 00:10:29.318 "read": true, 00:10:29.318 "write": true, 00:10:29.318 "unmap": true, 00:10:29.318 "flush": true, 00:10:29.318 "reset": true, 00:10:29.318 "nvme_admin": false, 00:10:29.318 "nvme_io": false, 00:10:29.318 "nvme_io_md": false, 00:10:29.318 "write_zeroes": true, 00:10:29.318 "zcopy": true, 00:10:29.318 "get_zone_info": false, 00:10:29.318 "zone_management": false, 00:10:29.318 "zone_append": false, 00:10:29.318 "compare": false, 00:10:29.318 "compare_and_write": false, 00:10:29.318 "abort": true, 00:10:29.318 "seek_hole": false, 00:10:29.318 "seek_data": false, 00:10:29.318 "copy": true, 00:10:29.318 "nvme_iov_md": false 00:10:29.318 }, 00:10:29.318 "memory_domains": [ 00:10:29.318 { 00:10:29.318 "dma_device_id": "system", 00:10:29.318 "dma_device_type": 1 00:10:29.318 }, 00:10:29.318 { 00:10:29.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.318 "dma_device_type": 2 00:10:29.318 } 00:10:29.318 ], 00:10:29.318 "driver_specific": {} 00:10:29.318 } 00:10:29.318 ] 
00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.318 "name": "Existed_Raid", 00:10:29.318 "uuid": "079bc9e0-2319-4e99-ac4e-ba4c88631fe9", 00:10:29.318 "strip_size_kb": 64, 00:10:29.318 "state": "online", 00:10:29.318 "raid_level": "raid0", 00:10:29.318 "superblock": false, 00:10:29.318 "num_base_bdevs": 3, 00:10:29.318 "num_base_bdevs_discovered": 3, 00:10:29.318 "num_base_bdevs_operational": 3, 00:10:29.318 "base_bdevs_list": [ 00:10:29.318 { 00:10:29.318 "name": "BaseBdev1", 00:10:29.318 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:29.318 "is_configured": true, 00:10:29.318 "data_offset": 0, 00:10:29.318 "data_size": 65536 00:10:29.318 }, 00:10:29.318 { 00:10:29.318 "name": "BaseBdev2", 00:10:29.318 "uuid": "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4", 00:10:29.318 "is_configured": true, 00:10:29.318 "data_offset": 0, 00:10:29.318 "data_size": 65536 00:10:29.318 }, 00:10:29.318 { 00:10:29.318 "name": "BaseBdev3", 00:10:29.318 "uuid": "bcf5f9da-8fb8-40ca-9301-72570cd8f330", 00:10:29.318 "is_configured": true, 00:10:29.318 "data_offset": 0, 00:10:29.318 "data_size": 65536 00:10:29.318 } 00:10:29.318 ] 00:10:29.318 }' 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.318 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.884 [2024-12-05 20:03:31.080786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.884 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.884 "name": "Existed_Raid", 00:10:29.884 "aliases": [ 00:10:29.884 "079bc9e0-2319-4e99-ac4e-ba4c88631fe9" 00:10:29.884 ], 00:10:29.884 "product_name": "Raid Volume", 00:10:29.884 "block_size": 512, 00:10:29.884 "num_blocks": 196608, 00:10:29.884 "uuid": "079bc9e0-2319-4e99-ac4e-ba4c88631fe9", 00:10:29.884 "assigned_rate_limits": { 00:10:29.884 "rw_ios_per_sec": 0, 00:10:29.884 "rw_mbytes_per_sec": 0, 00:10:29.884 "r_mbytes_per_sec": 0, 00:10:29.884 "w_mbytes_per_sec": 0 00:10:29.884 }, 00:10:29.884 "claimed": false, 00:10:29.884 "zoned": false, 00:10:29.884 "supported_io_types": { 00:10:29.884 "read": true, 00:10:29.884 "write": true, 00:10:29.884 "unmap": true, 00:10:29.884 "flush": true, 00:10:29.884 "reset": true, 00:10:29.884 "nvme_admin": false, 00:10:29.884 "nvme_io": false, 00:10:29.884 "nvme_io_md": false, 00:10:29.884 "write_zeroes": true, 00:10:29.884 "zcopy": false, 00:10:29.884 "get_zone_info": false, 00:10:29.884 "zone_management": false, 00:10:29.884 
"zone_append": false, 00:10:29.884 "compare": false, 00:10:29.884 "compare_and_write": false, 00:10:29.884 "abort": false, 00:10:29.884 "seek_hole": false, 00:10:29.884 "seek_data": false, 00:10:29.884 "copy": false, 00:10:29.884 "nvme_iov_md": false 00:10:29.884 }, 00:10:29.884 "memory_domains": [ 00:10:29.884 { 00:10:29.884 "dma_device_id": "system", 00:10:29.884 "dma_device_type": 1 00:10:29.884 }, 00:10:29.884 { 00:10:29.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.884 "dma_device_type": 2 00:10:29.884 }, 00:10:29.884 { 00:10:29.884 "dma_device_id": "system", 00:10:29.884 "dma_device_type": 1 00:10:29.884 }, 00:10:29.884 { 00:10:29.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.884 "dma_device_type": 2 00:10:29.884 }, 00:10:29.884 { 00:10:29.884 "dma_device_id": "system", 00:10:29.884 "dma_device_type": 1 00:10:29.884 }, 00:10:29.884 { 00:10:29.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.884 "dma_device_type": 2 00:10:29.884 } 00:10:29.884 ], 00:10:29.884 "driver_specific": { 00:10:29.884 "raid": { 00:10:29.884 "uuid": "079bc9e0-2319-4e99-ac4e-ba4c88631fe9", 00:10:29.884 "strip_size_kb": 64, 00:10:29.884 "state": "online", 00:10:29.884 "raid_level": "raid0", 00:10:29.884 "superblock": false, 00:10:29.884 "num_base_bdevs": 3, 00:10:29.885 "num_base_bdevs_discovered": 3, 00:10:29.885 "num_base_bdevs_operational": 3, 00:10:29.885 "base_bdevs_list": [ 00:10:29.885 { 00:10:29.885 "name": "BaseBdev1", 00:10:29.885 "uuid": "f920d50a-549d-4828-9f35-d43a6be6c5ae", 00:10:29.885 "is_configured": true, 00:10:29.885 "data_offset": 0, 00:10:29.885 "data_size": 65536 00:10:29.885 }, 00:10:29.885 { 00:10:29.885 "name": "BaseBdev2", 00:10:29.885 "uuid": "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4", 00:10:29.885 "is_configured": true, 00:10:29.885 "data_offset": 0, 00:10:29.885 "data_size": 65536 00:10:29.885 }, 00:10:29.885 { 00:10:29.885 "name": "BaseBdev3", 00:10:29.885 "uuid": "bcf5f9da-8fb8-40ca-9301-72570cd8f330", 00:10:29.885 "is_configured": true, 
00:10:29.885 "data_offset": 0, 00:10:29.885 "data_size": 65536 00:10:29.885 } 00:10:29.885 ] 00:10:29.885 } 00:10:29.885 } 00:10:29.885 }' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.885 BaseBdev2 00:10:29.885 BaseBdev3' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.885 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.146 [2024-12-05 20:03:31.384070] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.146 [2024-12-05 20:03:31.384110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.146 [2024-12-05 20:03:31.384185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.146 "name": "Existed_Raid", 00:10:30.146 "uuid": "079bc9e0-2319-4e99-ac4e-ba4c88631fe9", 00:10:30.146 "strip_size_kb": 64, 00:10:30.146 "state": "offline", 00:10:30.146 "raid_level": "raid0", 00:10:30.146 "superblock": false, 00:10:30.146 "num_base_bdevs": 3, 00:10:30.146 "num_base_bdevs_discovered": 2, 00:10:30.146 "num_base_bdevs_operational": 2, 00:10:30.146 "base_bdevs_list": [ 00:10:30.146 { 00:10:30.146 "name": null, 00:10:30.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.146 "is_configured": false, 00:10:30.146 "data_offset": 0, 00:10:30.146 "data_size": 65536 00:10:30.146 }, 00:10:30.146 { 00:10:30.146 "name": "BaseBdev2", 00:10:30.146 "uuid": "b7f7df3b-99b9-4a2e-90c4-2d2190a76fd4", 00:10:30.146 "is_configured": true, 00:10:30.146 "data_offset": 0, 00:10:30.146 "data_size": 65536 00:10:30.146 }, 00:10:30.146 { 00:10:30.146 "name": "BaseBdev3", 00:10:30.146 "uuid": "bcf5f9da-8fb8-40ca-9301-72570cd8f330", 00:10:30.146 "is_configured": true, 00:10:30.146 "data_offset": 0, 00:10:30.146 "data_size": 65536 00:10:30.146 } 00:10:30.146 ] 00:10:30.146 }' 00:10:30.146 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.146 20:03:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.719 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 [2024-12-05 20:03:31.954005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.719 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 [2024-12-05 20:03:32.108651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.719 [2024-12-05 20:03:32.108759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 20:03:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 BaseBdev2 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.979 20:03:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.979 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 [ 00:10:30.979 { 00:10:30.979 "name": "BaseBdev2", 00:10:30.979 "aliases": [ 00:10:30.979 "4965d092-4681-4847-a3b0-6f2f36831ef8" 00:10:30.979 ], 00:10:30.979 "product_name": "Malloc disk", 00:10:30.979 "block_size": 512, 00:10:30.979 "num_blocks": 65536, 00:10:30.979 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:30.979 "assigned_rate_limits": { 00:10:30.979 "rw_ios_per_sec": 0, 00:10:30.979 "rw_mbytes_per_sec": 0, 00:10:30.979 "r_mbytes_per_sec": 0, 00:10:30.979 "w_mbytes_per_sec": 0 00:10:30.979 }, 00:10:30.979 "claimed": false, 00:10:30.979 "zoned": false, 00:10:30.979 "supported_io_types": { 00:10:30.979 "read": true, 00:10:30.979 "write": true, 00:10:30.979 "unmap": true, 00:10:30.979 "flush": true, 00:10:30.979 "reset": true, 00:10:30.979 "nvme_admin": false, 00:10:30.979 "nvme_io": false, 00:10:30.979 "nvme_io_md": false, 00:10:30.979 "write_zeroes": true, 00:10:30.979 "zcopy": true, 00:10:30.980 "get_zone_info": false, 00:10:30.980 "zone_management": false, 00:10:30.980 "zone_append": false, 00:10:30.980 "compare": false, 00:10:30.980 "compare_and_write": false, 00:10:30.980 "abort": true, 00:10:30.980 "seek_hole": false, 00:10:30.980 "seek_data": false, 00:10:30.980 "copy": true, 00:10:30.980 "nvme_iov_md": false 00:10:30.980 }, 00:10:30.980 "memory_domains": [ 00:10:30.980 { 00:10:30.980 "dma_device_id": "system", 00:10:30.980 "dma_device_type": 1 00:10:30.980 }, 00:10:30.980 { 00:10:30.980 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:30.980 "dma_device_type": 2 00:10:30.980 } 00:10:30.980 ], 00:10:30.980 "driver_specific": {} 00:10:30.980 } 00:10:30.980 ] 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.980 BaseBdev3 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.980 20:03:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.980 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.980 [ 00:10:30.980 { 00:10:30.980 "name": "BaseBdev3", 00:10:30.980 "aliases": [ 00:10:30.980 "d381e515-755a-47b5-846f-61f2a3637194" 00:10:30.980 ], 00:10:30.980 "product_name": "Malloc disk", 00:10:30.980 "block_size": 512, 00:10:30.980 "num_blocks": 65536, 00:10:30.980 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:30.980 "assigned_rate_limits": { 00:10:30.980 "rw_ios_per_sec": 0, 00:10:30.980 "rw_mbytes_per_sec": 0, 00:10:30.980 "r_mbytes_per_sec": 0, 00:10:30.980 "w_mbytes_per_sec": 0 00:10:30.980 }, 00:10:30.980 "claimed": false, 00:10:30.980 "zoned": false, 00:10:30.980 "supported_io_types": { 00:10:30.980 "read": true, 00:10:30.980 "write": true, 00:10:30.980 "unmap": true, 00:10:30.980 "flush": true, 00:10:30.980 "reset": true, 00:10:30.980 "nvme_admin": false, 00:10:30.980 "nvme_io": false, 00:10:30.980 "nvme_io_md": false, 00:10:30.980 "write_zeroes": true, 00:10:30.980 "zcopy": true, 00:10:30.980 "get_zone_info": false, 00:10:30.980 "zone_management": false, 00:10:30.980 "zone_append": false, 00:10:30.980 "compare": false, 00:10:30.980 "compare_and_write": false, 00:10:30.980 "abort": true, 00:10:30.980 "seek_hole": false, 00:10:30.980 "seek_data": false, 00:10:30.980 "copy": true, 00:10:30.980 "nvme_iov_md": false 00:10:30.980 }, 00:10:30.980 "memory_domains": [ 00:10:30.980 { 00:10:30.980 "dma_device_id": "system", 00:10:30.980 "dma_device_type": 1 00:10:30.980 }, 00:10:30.980 { 00:10:30.980 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:30.980 "dma_device_type": 2 00:10:30.980 } 00:10:30.980 ], 00:10:30.980 "driver_specific": {} 00:10:30.980 } 00:10:30.980 ] 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.240 [2024-12-05 20:03:32.420268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.240 [2024-12-05 20:03:32.420361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.240 [2024-12-05 20:03:32.420416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.240 [2024-12-05 20:03:32.422368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.240 
20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.240 "name": "Existed_Raid", 00:10:31.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.240 "strip_size_kb": 64, 00:10:31.240 "state": "configuring", 00:10:31.240 "raid_level": "raid0", 00:10:31.240 "superblock": false, 00:10:31.240 "num_base_bdevs": 3, 00:10:31.240 "num_base_bdevs_discovered": 2, 00:10:31.240 "num_base_bdevs_operational": 3, 00:10:31.240 "base_bdevs_list": [ 00:10:31.240 { 00:10:31.240 "name": "BaseBdev1", 00:10:31.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.240 "is_configured": false, 00:10:31.240 
"data_offset": 0, 00:10:31.240 "data_size": 0 00:10:31.240 }, 00:10:31.240 { 00:10:31.240 "name": "BaseBdev2", 00:10:31.240 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:31.240 "is_configured": true, 00:10:31.240 "data_offset": 0, 00:10:31.240 "data_size": 65536 00:10:31.240 }, 00:10:31.240 { 00:10:31.240 "name": "BaseBdev3", 00:10:31.240 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:31.240 "is_configured": true, 00:10:31.240 "data_offset": 0, 00:10:31.240 "data_size": 65536 00:10:31.240 } 00:10:31.240 ] 00:10:31.240 }' 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.240 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.500 [2024-12-05 20:03:32.843610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.500 "name": "Existed_Raid", 00:10:31.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.500 "strip_size_kb": 64, 00:10:31.500 "state": "configuring", 00:10:31.500 "raid_level": "raid0", 00:10:31.500 "superblock": false, 00:10:31.500 "num_base_bdevs": 3, 00:10:31.500 "num_base_bdevs_discovered": 1, 00:10:31.500 "num_base_bdevs_operational": 3, 00:10:31.500 "base_bdevs_list": [ 00:10:31.500 { 00:10:31.500 "name": "BaseBdev1", 00:10:31.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.500 "is_configured": false, 00:10:31.500 "data_offset": 0, 00:10:31.500 "data_size": 0 00:10:31.500 }, 00:10:31.500 { 00:10:31.500 "name": null, 00:10:31.500 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:31.500 "is_configured": false, 00:10:31.500 "data_offset": 0, 00:10:31.500 "data_size": 65536 00:10:31.500 }, 00:10:31.500 { 
00:10:31.500 "name": "BaseBdev3", 00:10:31.500 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:31.500 "is_configured": true, 00:10:31.500 "data_offset": 0, 00:10:31.500 "data_size": 65536 00:10:31.500 } 00:10:31.500 ] 00:10:31.500 }' 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.500 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 [2024-12-05 20:03:33.358404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.070 BaseBdev1 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:32.070 20:03:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 [ 00:10:32.070 { 00:10:32.070 "name": "BaseBdev1", 00:10:32.070 "aliases": [ 00:10:32.070 "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa" 00:10:32.070 ], 00:10:32.070 "product_name": "Malloc disk", 00:10:32.070 "block_size": 512, 00:10:32.070 "num_blocks": 65536, 00:10:32.070 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:32.070 "assigned_rate_limits": { 00:10:32.070 "rw_ios_per_sec": 0, 00:10:32.070 "rw_mbytes_per_sec": 0, 00:10:32.070 "r_mbytes_per_sec": 0, 00:10:32.070 "w_mbytes_per_sec": 0 00:10:32.070 }, 00:10:32.070 "claimed": true, 00:10:32.070 "claim_type": "exclusive_write", 00:10:32.070 "zoned": false, 00:10:32.070 "supported_io_types": { 00:10:32.070 "read": true, 00:10:32.070 "write": true, 00:10:32.070 "unmap": true, 00:10:32.070 "flush": true, 
00:10:32.070 "reset": true, 00:10:32.070 "nvme_admin": false, 00:10:32.070 "nvme_io": false, 00:10:32.070 "nvme_io_md": false, 00:10:32.070 "write_zeroes": true, 00:10:32.070 "zcopy": true, 00:10:32.070 "get_zone_info": false, 00:10:32.070 "zone_management": false, 00:10:32.070 "zone_append": false, 00:10:32.070 "compare": false, 00:10:32.070 "compare_and_write": false, 00:10:32.070 "abort": true, 00:10:32.070 "seek_hole": false, 00:10:32.070 "seek_data": false, 00:10:32.070 "copy": true, 00:10:32.070 "nvme_iov_md": false 00:10:32.070 }, 00:10:32.070 "memory_domains": [ 00:10:32.070 { 00:10:32.070 "dma_device_id": "system", 00:10:32.070 "dma_device_type": 1 00:10:32.070 }, 00:10:32.070 { 00:10:32.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.070 "dma_device_type": 2 00:10:32.070 } 00:10:32.070 ], 00:10:32.070 "driver_specific": {} 00:10:32.070 } 00:10:32.070 ] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.070 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.071 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.071 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.071 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.071 "name": "Existed_Raid", 00:10:32.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.071 "strip_size_kb": 64, 00:10:32.071 "state": "configuring", 00:10:32.071 "raid_level": "raid0", 00:10:32.071 "superblock": false, 00:10:32.071 "num_base_bdevs": 3, 00:10:32.071 "num_base_bdevs_discovered": 2, 00:10:32.071 "num_base_bdevs_operational": 3, 00:10:32.071 "base_bdevs_list": [ 00:10:32.071 { 00:10:32.071 "name": "BaseBdev1", 00:10:32.071 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:32.071 "is_configured": true, 00:10:32.071 "data_offset": 0, 00:10:32.071 "data_size": 65536 00:10:32.071 }, 00:10:32.071 { 00:10:32.071 "name": null, 00:10:32.071 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:32.071 "is_configured": false, 00:10:32.071 "data_offset": 0, 00:10:32.071 "data_size": 65536 00:10:32.071 }, 00:10:32.071 { 00:10:32.071 "name": "BaseBdev3", 00:10:32.071 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:32.071 "is_configured": true, 00:10:32.071 "data_offset": 0, 00:10:32.071 "data_size": 65536 
00:10:32.071 } 00:10:32.071 ] 00:10:32.071 }' 00:10:32.071 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.071 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 [2024-12-05 20:03:33.849627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.639 
20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.639 "name": "Existed_Raid", 00:10:32.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.639 "strip_size_kb": 64, 00:10:32.639 "state": "configuring", 00:10:32.639 "raid_level": "raid0", 00:10:32.639 "superblock": false, 00:10:32.639 "num_base_bdevs": 3, 00:10:32.639 "num_base_bdevs_discovered": 1, 00:10:32.639 "num_base_bdevs_operational": 3, 00:10:32.639 "base_bdevs_list": [ 00:10:32.639 { 00:10:32.639 "name": "BaseBdev1", 00:10:32.639 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:32.639 "is_configured": true, 00:10:32.639 "data_offset": 0, 00:10:32.639 "data_size": 65536 00:10:32.639 }, 00:10:32.639 { 00:10:32.639 "name": null, 
00:10:32.639 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:32.639 "is_configured": false, 00:10:32.639 "data_offset": 0, 00:10:32.639 "data_size": 65536 00:10:32.639 }, 00:10:32.639 { 00:10:32.639 "name": null, 00:10:32.639 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:32.639 "is_configured": false, 00:10:32.639 "data_offset": 0, 00:10:32.639 "data_size": 65536 00:10:32.639 } 00:10:32.639 ] 00:10:32.639 }' 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.639 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.899 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.899 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.899 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.899 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.158 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.158 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.159 [2024-12-05 20:03:34.364795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.159 "name": "Existed_Raid", 00:10:33.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.159 "strip_size_kb": 64, 00:10:33.159 "state": "configuring", 00:10:33.159 "raid_level": "raid0", 00:10:33.159 "superblock": false, 00:10:33.159 
"num_base_bdevs": 3, 00:10:33.159 "num_base_bdevs_discovered": 2, 00:10:33.159 "num_base_bdevs_operational": 3, 00:10:33.159 "base_bdevs_list": [ 00:10:33.159 { 00:10:33.159 "name": "BaseBdev1", 00:10:33.159 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:33.159 "is_configured": true, 00:10:33.159 "data_offset": 0, 00:10:33.159 "data_size": 65536 00:10:33.159 }, 00:10:33.159 { 00:10:33.159 "name": null, 00:10:33.159 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:33.159 "is_configured": false, 00:10:33.159 "data_offset": 0, 00:10:33.159 "data_size": 65536 00:10:33.159 }, 00:10:33.159 { 00:10:33.159 "name": "BaseBdev3", 00:10:33.159 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:33.159 "is_configured": true, 00:10:33.159 "data_offset": 0, 00:10:33.159 "data_size": 65536 00:10:33.159 } 00:10:33.159 ] 00:10:33.159 }' 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.159 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.419 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.419 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.419 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.678 20:03:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.678 [2024-12-05 20:03:34.871949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.678 20:03:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.678 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.678 "name": "Existed_Raid", 00:10:33.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.678 "strip_size_kb": 64, 00:10:33.678 "state": "configuring", 00:10:33.678 "raid_level": "raid0", 00:10:33.678 "superblock": false, 00:10:33.678 "num_base_bdevs": 3, 00:10:33.678 "num_base_bdevs_discovered": 1, 00:10:33.678 "num_base_bdevs_operational": 3, 00:10:33.678 "base_bdevs_list": [ 00:10:33.678 { 00:10:33.678 "name": null, 00:10:33.678 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:33.678 "is_configured": false, 00:10:33.678 "data_offset": 0, 00:10:33.678 "data_size": 65536 00:10:33.678 }, 00:10:33.678 { 00:10:33.678 "name": null, 00:10:33.678 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:33.678 "is_configured": false, 00:10:33.678 "data_offset": 0, 00:10:33.678 "data_size": 65536 00:10:33.678 }, 00:10:33.678 { 00:10:33.678 "name": "BaseBdev3", 00:10:33.678 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:33.678 "is_configured": true, 00:10:33.678 "data_offset": 0, 00:10:33.678 "data_size": 65536 00:10:33.678 } 00:10:33.678 ] 00:10:33.678 }' 00:10:33.678 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.678 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.246 [2024-12-05 20:03:35.438592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.246 "name": "Existed_Raid", 00:10:34.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.246 "strip_size_kb": 64, 00:10:34.246 "state": "configuring", 00:10:34.246 "raid_level": "raid0", 00:10:34.246 "superblock": false, 00:10:34.246 "num_base_bdevs": 3, 00:10:34.246 "num_base_bdevs_discovered": 2, 00:10:34.246 "num_base_bdevs_operational": 3, 00:10:34.246 "base_bdevs_list": [ 00:10:34.246 { 00:10:34.246 "name": null, 00:10:34.246 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:34.246 "is_configured": false, 00:10:34.246 "data_offset": 0, 00:10:34.246 "data_size": 65536 00:10:34.246 }, 00:10:34.246 { 00:10:34.246 "name": "BaseBdev2", 00:10:34.246 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:34.246 "is_configured": true, 00:10:34.246 "data_offset": 0, 00:10:34.246 "data_size": 65536 00:10:34.246 }, 00:10:34.246 { 00:10:34.246 "name": "BaseBdev3", 00:10:34.246 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:34.246 "is_configured": true, 00:10:34.246 "data_offset": 0, 00:10:34.246 "data_size": 65536 00:10:34.246 } 00:10:34.246 ] 00:10:34.246 }' 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.246 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.505 
20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.505 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3adefe79-0c9d-41fb-9d9f-9a9824beb3fa 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.764 [2024-12-05 20:03:35.991583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.764 [2024-12-05 20:03:35.991725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.764 [2024-12-05 20:03:35.991753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:34.764 [2024-12-05 20:03:35.992067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:34.764 [2024-12-05 20:03:35.992324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.764 [2024-12-05 20:03:35.992372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:34.764 [2024-12-05 20:03:35.992708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.764 NewBaseBdev 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.764 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:34.764 [ 00:10:34.764 { 00:10:34.764 "name": "NewBaseBdev", 00:10:34.764 "aliases": [ 00:10:34.764 "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa" 00:10:34.764 ], 00:10:34.764 "product_name": "Malloc disk", 00:10:34.764 "block_size": 512, 00:10:34.764 "num_blocks": 65536, 00:10:34.764 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:34.764 "assigned_rate_limits": { 00:10:34.764 "rw_ios_per_sec": 0, 00:10:34.764 "rw_mbytes_per_sec": 0, 00:10:34.764 "r_mbytes_per_sec": 0, 00:10:34.764 "w_mbytes_per_sec": 0 00:10:34.764 }, 00:10:34.764 "claimed": true, 00:10:34.764 "claim_type": "exclusive_write", 00:10:34.764 "zoned": false, 00:10:34.764 "supported_io_types": { 00:10:34.764 "read": true, 00:10:34.764 "write": true, 00:10:34.764 "unmap": true, 00:10:34.764 "flush": true, 00:10:34.764 "reset": true, 00:10:34.764 "nvme_admin": false, 00:10:34.764 "nvme_io": false, 00:10:34.764 "nvme_io_md": false, 00:10:34.764 "write_zeroes": true, 00:10:34.764 "zcopy": true, 00:10:34.764 "get_zone_info": false, 00:10:34.764 "zone_management": false, 00:10:34.764 "zone_append": false, 00:10:34.764 "compare": false, 00:10:34.764 "compare_and_write": false, 00:10:34.764 "abort": true, 00:10:34.764 "seek_hole": false, 00:10:34.764 "seek_data": false, 00:10:34.764 "copy": true, 00:10:34.764 "nvme_iov_md": false 00:10:34.764 }, 00:10:34.764 "memory_domains": [ 00:10:34.764 { 00:10:34.764 "dma_device_id": "system", 00:10:34.764 "dma_device_type": 1 00:10:34.764 }, 00:10:34.764 { 00:10:34.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.764 "dma_device_type": 2 00:10:34.764 } 00:10:34.764 ], 00:10:34.764 "driver_specific": {} 00:10:34.764 } 00:10:34.764 ] 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.764 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.764 "name": "Existed_Raid", 00:10:34.764 "uuid": "2c6eb4eb-0c36-4a49-b0a4-a7f930c116e3", 00:10:34.764 "strip_size_kb": 64, 00:10:34.764 "state": "online", 00:10:34.764 "raid_level": "raid0", 00:10:34.764 "superblock": false, 00:10:34.765 "num_base_bdevs": 3, 00:10:34.765 
"num_base_bdevs_discovered": 3, 00:10:34.765 "num_base_bdevs_operational": 3, 00:10:34.765 "base_bdevs_list": [ 00:10:34.765 { 00:10:34.765 "name": "NewBaseBdev", 00:10:34.765 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:34.765 "is_configured": true, 00:10:34.765 "data_offset": 0, 00:10:34.765 "data_size": 65536 00:10:34.765 }, 00:10:34.765 { 00:10:34.765 "name": "BaseBdev2", 00:10:34.765 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:34.765 "is_configured": true, 00:10:34.765 "data_offset": 0, 00:10:34.765 "data_size": 65536 00:10:34.765 }, 00:10:34.765 { 00:10:34.765 "name": "BaseBdev3", 00:10:34.765 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:34.765 "is_configured": true, 00:10:34.765 "data_offset": 0, 00:10:34.765 "data_size": 65536 00:10:34.765 } 00:10:34.765 ] 00:10:34.765 }' 00:10:34.765 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.765 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.333 [2024-12-05 20:03:36.475196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.333 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.333 "name": "Existed_Raid", 00:10:35.333 "aliases": [ 00:10:35.333 "2c6eb4eb-0c36-4a49-b0a4-a7f930c116e3" 00:10:35.333 ], 00:10:35.333 "product_name": "Raid Volume", 00:10:35.333 "block_size": 512, 00:10:35.333 "num_blocks": 196608, 00:10:35.333 "uuid": "2c6eb4eb-0c36-4a49-b0a4-a7f930c116e3", 00:10:35.333 "assigned_rate_limits": { 00:10:35.333 "rw_ios_per_sec": 0, 00:10:35.333 "rw_mbytes_per_sec": 0, 00:10:35.333 "r_mbytes_per_sec": 0, 00:10:35.333 "w_mbytes_per_sec": 0 00:10:35.333 }, 00:10:35.333 "claimed": false, 00:10:35.333 "zoned": false, 00:10:35.333 "supported_io_types": { 00:10:35.333 "read": true, 00:10:35.333 "write": true, 00:10:35.333 "unmap": true, 00:10:35.333 "flush": true, 00:10:35.333 "reset": true, 00:10:35.333 "nvme_admin": false, 00:10:35.333 "nvme_io": false, 00:10:35.334 "nvme_io_md": false, 00:10:35.334 "write_zeroes": true, 00:10:35.334 "zcopy": false, 00:10:35.334 "get_zone_info": false, 00:10:35.334 "zone_management": false, 00:10:35.334 "zone_append": false, 00:10:35.334 "compare": false, 00:10:35.334 "compare_and_write": false, 00:10:35.334 "abort": false, 00:10:35.334 "seek_hole": false, 00:10:35.334 "seek_data": false, 00:10:35.334 "copy": false, 00:10:35.334 "nvme_iov_md": false 00:10:35.334 }, 00:10:35.334 "memory_domains": [ 00:10:35.334 { 00:10:35.334 "dma_device_id": "system", 00:10:35.334 "dma_device_type": 1 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.334 "dma_device_type": 2 00:10:35.334 }, 
00:10:35.334 { 00:10:35.334 "dma_device_id": "system", 00:10:35.334 "dma_device_type": 1 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.334 "dma_device_type": 2 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "dma_device_id": "system", 00:10:35.334 "dma_device_type": 1 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.334 "dma_device_type": 2 00:10:35.334 } 00:10:35.334 ], 00:10:35.334 "driver_specific": { 00:10:35.334 "raid": { 00:10:35.334 "uuid": "2c6eb4eb-0c36-4a49-b0a4-a7f930c116e3", 00:10:35.334 "strip_size_kb": 64, 00:10:35.334 "state": "online", 00:10:35.334 "raid_level": "raid0", 00:10:35.334 "superblock": false, 00:10:35.334 "num_base_bdevs": 3, 00:10:35.334 "num_base_bdevs_discovered": 3, 00:10:35.334 "num_base_bdevs_operational": 3, 00:10:35.334 "base_bdevs_list": [ 00:10:35.334 { 00:10:35.334 "name": "NewBaseBdev", 00:10:35.334 "uuid": "3adefe79-0c9d-41fb-9d9f-9a9824beb3fa", 00:10:35.334 "is_configured": true, 00:10:35.334 "data_offset": 0, 00:10:35.334 "data_size": 65536 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "name": "BaseBdev2", 00:10:35.334 "uuid": "4965d092-4681-4847-a3b0-6f2f36831ef8", 00:10:35.334 "is_configured": true, 00:10:35.334 "data_offset": 0, 00:10:35.334 "data_size": 65536 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "name": "BaseBdev3", 00:10:35.334 "uuid": "d381e515-755a-47b5-846f-61f2a3637194", 00:10:35.334 "is_configured": true, 00:10:35.334 "data_offset": 0, 00:10:35.334 "data_size": 65536 00:10:35.334 } 00:10:35.334 ] 00:10:35.334 } 00:10:35.334 } 00:10:35.334 }' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:35.334 BaseBdev2 00:10:35.334 BaseBdev3' 00:10:35.334 20:03:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.594 [2024-12-05 20:03:36.770345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.594 [2024-12-05 20:03:36.770427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.594 [2024-12-05 20:03:36.770551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.594 [2024-12-05 20:03:36.770649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.594 [2024-12-05 20:03:36.770666] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63935 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63935 ']' 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63935 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63935 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.594 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63935' 00:10:35.594 killing process with pid 63935 00:10:35.595 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63935 00:10:35.595 [2024-12-05 20:03:36.811185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.595 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63935 00:10:35.854 [2024-12-05 20:03:37.125565] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.235 00:10:37.235 real 0m10.688s 00:10:37.235 user 0m16.984s 00:10:37.235 sys 0m1.762s 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:10:37.235 ************************************ 00:10:37.235 END TEST raid_state_function_test 00:10:37.235 ************************************ 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.235 20:03:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:37.235 20:03:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.235 20:03:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.235 20:03:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.235 ************************************ 00:10:37.235 START TEST raid_state_function_test_sb 00:10:37.235 ************************************ 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.235 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64557 00:10:37.236 20:03:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64557' 00:10:37.236 Process raid pid: 64557 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64557 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64557 ']' 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.236 20:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.236 [2024-12-05 20:03:38.465823] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:37.236 [2024-12-05 20:03:38.465942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.236 [2024-12-05 20:03:38.628578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.496 [2024-12-05 20:03:38.746149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.761 [2024-12-05 20:03:38.950987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.761 [2024-12-05 20:03:38.951029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.046 [2024-12-05 20:03:39.316821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.046 [2024-12-05 20:03:39.316896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.046 [2024-12-05 20:03:39.316908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.046 [2024-12-05 20:03:39.316918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.046 [2024-12-05 20:03:39.316925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:38.046 [2024-12-05 20:03:39.316934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.046 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.047 "name": "Existed_Raid", 00:10:38.047 "uuid": "6c774b72-a99d-4201-84ee-77ba4c27a583", 00:10:38.047 "strip_size_kb": 64, 00:10:38.047 "state": "configuring", 00:10:38.047 "raid_level": "raid0", 00:10:38.047 "superblock": true, 00:10:38.047 "num_base_bdevs": 3, 00:10:38.047 "num_base_bdevs_discovered": 0, 00:10:38.047 "num_base_bdevs_operational": 3, 00:10:38.047 "base_bdevs_list": [ 00:10:38.047 { 00:10:38.047 "name": "BaseBdev1", 00:10:38.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.047 "is_configured": false, 00:10:38.047 "data_offset": 0, 00:10:38.047 "data_size": 0 00:10:38.047 }, 00:10:38.047 { 00:10:38.047 "name": "BaseBdev2", 00:10:38.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.047 "is_configured": false, 00:10:38.047 "data_offset": 0, 00:10:38.047 "data_size": 0 00:10:38.047 }, 00:10:38.047 { 00:10:38.047 "name": "BaseBdev3", 00:10:38.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.047 "is_configured": false, 00:10:38.047 "data_offset": 0, 00:10:38.047 "data_size": 0 00:10:38.047 } 00:10:38.047 ] 00:10:38.047 }' 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.047 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.307 [2024-12-05 20:03:39.736042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.307 [2024-12-05 20:03:39.736162] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.307 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.568 [2024-12-05 20:03:39.744055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.568 [2024-12-05 20:03:39.744150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.568 [2024-12-05 20:03:39.744181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.568 [2024-12-05 20:03:39.744224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.568 [2024-12-05 20:03:39.744246] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.568 [2024-12-05 20:03:39.744271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.568 [2024-12-05 20:03:39.787383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.568 BaseBdev1 
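The trace above shows `bdev_raid_create` accepting a raid0 request while none of its base bdevs exist yet (each logs "doesn't exist now" and the array stays in the "configuring" state), after which `bdev_malloc_create 32 512 -b BaseBdev1` supplies the first base bdev, which the raid module immediately claims. A minimal sketch of that discovery bookkeeping, with hypothetical function and variable names (this is illustrative, not SPDK source):

```python
def discovered_count(requested_base_bdevs, existing_bdevs):
    """Mirror the log's bookkeeping: while a raid bdev is 'configuring',
    num_base_bdevs_discovered counts only base bdevs that already exist
    and were claimed; missing slots report the all-zero UUID."""
    return sum(1 for name in requested_base_bdevs if name in existing_bdevs)

requested = ["BaseBdev1", "BaseBdev2", "BaseBdev3"]

# Before any bdev_malloc_create: "base bdev BaseBdevN doesn't exist now"
assert discovered_count(requested, set()) == 0

# After 'bdev_malloc_create 32 512 -b BaseBdev1': BaseBdev1 is claimed
assert discovered_count(requested, {"BaseBdev1"}) == 1
```

Each later `bdev_malloc_create` in the trace bumps this count by one until it reaches `num_base_bdevs_operational` and the array can go online.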
00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.568 [ 00:10:38.568 { 00:10:38.568 "name": "BaseBdev1", 00:10:38.568 "aliases": [ 00:10:38.568 "f17816fd-a31c-46ae-b704-0c42194e700a" 00:10:38.568 ], 00:10:38.568 "product_name": "Malloc disk", 00:10:38.568 "block_size": 512, 00:10:38.568 "num_blocks": 65536, 00:10:38.568 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:38.568 "assigned_rate_limits": { 00:10:38.568 
"rw_ios_per_sec": 0, 00:10:38.568 "rw_mbytes_per_sec": 0, 00:10:38.568 "r_mbytes_per_sec": 0, 00:10:38.568 "w_mbytes_per_sec": 0 00:10:38.568 }, 00:10:38.568 "claimed": true, 00:10:38.568 "claim_type": "exclusive_write", 00:10:38.568 "zoned": false, 00:10:38.568 "supported_io_types": { 00:10:38.568 "read": true, 00:10:38.568 "write": true, 00:10:38.568 "unmap": true, 00:10:38.568 "flush": true, 00:10:38.568 "reset": true, 00:10:38.568 "nvme_admin": false, 00:10:38.568 "nvme_io": false, 00:10:38.568 "nvme_io_md": false, 00:10:38.568 "write_zeroes": true, 00:10:38.568 "zcopy": true, 00:10:38.568 "get_zone_info": false, 00:10:38.568 "zone_management": false, 00:10:38.568 "zone_append": false, 00:10:38.568 "compare": false, 00:10:38.568 "compare_and_write": false, 00:10:38.568 "abort": true, 00:10:38.568 "seek_hole": false, 00:10:38.568 "seek_data": false, 00:10:38.568 "copy": true, 00:10:38.568 "nvme_iov_md": false 00:10:38.568 }, 00:10:38.568 "memory_domains": [ 00:10:38.568 { 00:10:38.568 "dma_device_id": "system", 00:10:38.568 "dma_device_type": 1 00:10:38.568 }, 00:10:38.568 { 00:10:38.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.568 "dma_device_type": 2 00:10:38.568 } 00:10:38.568 ], 00:10:38.568 "driver_specific": {} 00:10:38.568 } 00:10:38.568 ] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.568 "name": "Existed_Raid", 00:10:38.568 "uuid": "fddfd0b2-f4a6-4acd-9126-1d43e3af92e9", 00:10:38.568 "strip_size_kb": 64, 00:10:38.568 "state": "configuring", 00:10:38.568 "raid_level": "raid0", 00:10:38.568 "superblock": true, 00:10:38.568 "num_base_bdevs": 3, 00:10:38.568 "num_base_bdevs_discovered": 1, 00:10:38.568 "num_base_bdevs_operational": 3, 00:10:38.568 "base_bdevs_list": [ 00:10:38.568 { 00:10:38.568 "name": "BaseBdev1", 00:10:38.568 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:38.568 "is_configured": true, 00:10:38.568 "data_offset": 2048, 00:10:38.568 "data_size": 63488 
00:10:38.568 }, 00:10:38.568 { 00:10:38.568 "name": "BaseBdev2", 00:10:38.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.568 "is_configured": false, 00:10:38.568 "data_offset": 0, 00:10:38.568 "data_size": 0 00:10:38.568 }, 00:10:38.568 { 00:10:38.568 "name": "BaseBdev3", 00:10:38.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.568 "is_configured": false, 00:10:38.568 "data_offset": 0, 00:10:38.568 "data_size": 0 00:10:38.568 } 00:10:38.568 ] 00:10:38.568 }' 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.568 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.137 [2024-12-05 20:03:40.322547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.137 [2024-12-05 20:03:40.322606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.137 [2024-12-05 20:03:40.334558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.137 [2024-12-05 
20:03:40.336383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.137 [2024-12-05 20:03:40.336428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.137 [2024-12-05 20:03:40.336439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.137 [2024-12-05 20:03:40.336448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.137 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.137 "name": "Existed_Raid", 00:10:39.137 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:39.137 "strip_size_kb": 64, 00:10:39.137 "state": "configuring", 00:10:39.137 "raid_level": "raid0", 00:10:39.137 "superblock": true, 00:10:39.137 "num_base_bdevs": 3, 00:10:39.137 "num_base_bdevs_discovered": 1, 00:10:39.137 "num_base_bdevs_operational": 3, 00:10:39.137 "base_bdevs_list": [ 00:10:39.137 { 00:10:39.137 "name": "BaseBdev1", 00:10:39.137 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:39.137 "is_configured": true, 00:10:39.137 "data_offset": 2048, 00:10:39.137 "data_size": 63488 00:10:39.137 }, 00:10:39.138 { 00:10:39.138 "name": "BaseBdev2", 00:10:39.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.138 "is_configured": false, 00:10:39.138 "data_offset": 0, 00:10:39.138 "data_size": 0 00:10:39.138 }, 00:10:39.138 { 00:10:39.138 "name": "BaseBdev3", 00:10:39.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.138 "is_configured": false, 00:10:39.138 "data_offset": 0, 00:10:39.138 "data_size": 0 00:10:39.138 } 00:10:39.138 ] 00:10:39.138 }' 00:10:39.138 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.138 20:03:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.397 [2024-12-05 20:03:40.828708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.397 BaseBdev2 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.397 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.657 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.657 [ 00:10:39.657 { 00:10:39.657 "name": "BaseBdev2", 00:10:39.657 "aliases": [ 00:10:39.657 "4805f151-15b9-4206-849f-6ea9101f511f" 00:10:39.657 ], 00:10:39.657 "product_name": "Malloc disk", 00:10:39.657 "block_size": 512, 00:10:39.657 "num_blocks": 65536, 00:10:39.657 "uuid": "4805f151-15b9-4206-849f-6ea9101f511f", 00:10:39.657 "assigned_rate_limits": { 00:10:39.657 "rw_ios_per_sec": 0, 00:10:39.657 "rw_mbytes_per_sec": 0, 00:10:39.657 "r_mbytes_per_sec": 0, 00:10:39.657 "w_mbytes_per_sec": 0 00:10:39.657 }, 00:10:39.657 "claimed": true, 00:10:39.658 "claim_type": "exclusive_write", 00:10:39.658 "zoned": false, 00:10:39.658 "supported_io_types": { 00:10:39.658 "read": true, 00:10:39.658 "write": true, 00:10:39.658 "unmap": true, 00:10:39.658 "flush": true, 00:10:39.658 "reset": true, 00:10:39.658 "nvme_admin": false, 00:10:39.658 "nvme_io": false, 00:10:39.658 "nvme_io_md": false, 00:10:39.658 "write_zeroes": true, 00:10:39.658 "zcopy": true, 00:10:39.658 "get_zone_info": false, 00:10:39.658 "zone_management": false, 00:10:39.658 "zone_append": false, 00:10:39.658 "compare": false, 00:10:39.658 "compare_and_write": false, 00:10:39.658 "abort": true, 00:10:39.658 "seek_hole": false, 00:10:39.658 "seek_data": false, 00:10:39.658 "copy": true, 00:10:39.658 "nvme_iov_md": false 00:10:39.658 }, 00:10:39.658 "memory_domains": [ 00:10:39.658 { 00:10:39.658 "dma_device_id": "system", 00:10:39.658 "dma_device_type": 1 00:10:39.658 }, 00:10:39.658 { 00:10:39.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.658 "dma_device_type": 2 00:10:39.658 } 00:10:39.658 ], 00:10:39.658 "driver_specific": {} 00:10:39.658 } 00:10:39.658 ] 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.658 "name": "Existed_Raid", 00:10:39.658 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:39.658 "strip_size_kb": 64, 00:10:39.658 "state": "configuring", 00:10:39.658 "raid_level": "raid0", 00:10:39.658 "superblock": true, 00:10:39.658 "num_base_bdevs": 3, 00:10:39.658 "num_base_bdevs_discovered": 2, 00:10:39.658 "num_base_bdevs_operational": 3, 00:10:39.658 "base_bdevs_list": [ 00:10:39.658 { 00:10:39.658 "name": "BaseBdev1", 00:10:39.658 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:39.658 "is_configured": true, 00:10:39.658 "data_offset": 2048, 00:10:39.658 "data_size": 63488 00:10:39.658 }, 00:10:39.658 { 00:10:39.658 "name": "BaseBdev2", 00:10:39.658 "uuid": "4805f151-15b9-4206-849f-6ea9101f511f", 00:10:39.658 "is_configured": true, 00:10:39.658 "data_offset": 2048, 00:10:39.658 "data_size": 63488 00:10:39.658 }, 00:10:39.658 { 00:10:39.658 "name": "BaseBdev3", 00:10:39.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.658 "is_configured": false, 00:10:39.658 "data_offset": 0, 00:10:39.658 "data_size": 0 00:10:39.658 } 00:10:39.658 ] 00:10:39.658 }' 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.658 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.918 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.918 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.918 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.179 [2024-12-05 20:03:41.391010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.179 [2024-12-05 20:03:41.391417] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.179 [2024-12-05 20:03:41.391444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:40.179 [2024-12-05 20:03:41.391726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:40.179 [2024-12-05 20:03:41.391890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.179 [2024-12-05 20:03:41.391915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.179 BaseBdev3 00:10:40.179 [2024-12-05 20:03:41.392092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.179 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.179 [ 00:10:40.179 { 00:10:40.179 "name": "BaseBdev3", 00:10:40.179 "aliases": [ 00:10:40.179 "735df7e7-1f14-4e56-b607-e8e287b39b4c" 00:10:40.179 ], 00:10:40.179 "product_name": "Malloc disk", 00:10:40.179 "block_size": 512, 00:10:40.179 "num_blocks": 65536, 00:10:40.179 "uuid": "735df7e7-1f14-4e56-b607-e8e287b39b4c", 00:10:40.179 "assigned_rate_limits": { 00:10:40.179 "rw_ios_per_sec": 0, 00:10:40.179 "rw_mbytes_per_sec": 0, 00:10:40.179 "r_mbytes_per_sec": 0, 00:10:40.179 "w_mbytes_per_sec": 0 00:10:40.179 }, 00:10:40.179 "claimed": true, 00:10:40.179 "claim_type": "exclusive_write", 00:10:40.179 "zoned": false, 00:10:40.179 "supported_io_types": { 00:10:40.179 "read": true, 00:10:40.179 "write": true, 00:10:40.179 "unmap": true, 00:10:40.179 "flush": true, 00:10:40.179 "reset": true, 00:10:40.179 "nvme_admin": false, 00:10:40.179 "nvme_io": false, 00:10:40.179 "nvme_io_md": false, 00:10:40.179 "write_zeroes": true, 00:10:40.179 "zcopy": true, 00:10:40.179 "get_zone_info": false, 00:10:40.179 "zone_management": false, 00:10:40.179 "zone_append": false, 00:10:40.179 "compare": false, 00:10:40.180 "compare_and_write": false, 00:10:40.180 "abort": true, 00:10:40.180 "seek_hole": false, 00:10:40.180 "seek_data": false, 00:10:40.180 "copy": true, 00:10:40.180 "nvme_iov_md": false 00:10:40.180 }, 00:10:40.180 "memory_domains": [ 00:10:40.180 { 00:10:40.180 "dma_device_id": "system", 00:10:40.180 "dma_device_type": 1 00:10:40.180 }, 00:10:40.180 { 00:10:40.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.180 "dma_device_type": 2 00:10:40.180 } 00:10:40.180 ], 00:10:40.180 "driver_specific": 
{} 00:10:40.180 } 00:10:40.180 ] 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.180 "name": "Existed_Raid", 00:10:40.180 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:40.180 "strip_size_kb": 64, 00:10:40.180 "state": "online", 00:10:40.180 "raid_level": "raid0", 00:10:40.180 "superblock": true, 00:10:40.180 "num_base_bdevs": 3, 00:10:40.180 "num_base_bdevs_discovered": 3, 00:10:40.180 "num_base_bdevs_operational": 3, 00:10:40.180 "base_bdevs_list": [ 00:10:40.180 { 00:10:40.180 "name": "BaseBdev1", 00:10:40.180 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:40.180 "is_configured": true, 00:10:40.180 "data_offset": 2048, 00:10:40.180 "data_size": 63488 00:10:40.180 }, 00:10:40.180 { 00:10:40.180 "name": "BaseBdev2", 00:10:40.180 "uuid": "4805f151-15b9-4206-849f-6ea9101f511f", 00:10:40.180 "is_configured": true, 00:10:40.180 "data_offset": 2048, 00:10:40.180 "data_size": 63488 00:10:40.180 }, 00:10:40.180 { 00:10:40.180 "name": "BaseBdev3", 00:10:40.180 "uuid": "735df7e7-1f14-4e56-b607-e8e287b39b4c", 00:10:40.180 "is_configured": true, 00:10:40.180 "data_offset": 2048, 00:10:40.180 "data_size": 63488 00:10:40.180 } 00:10:40.180 ] 00:10:40.180 }' 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.180 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.440 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.440 [2024-12-05 20:03:41.858555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.701 "name": "Existed_Raid", 00:10:40.701 "aliases": [ 00:10:40.701 "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4" 00:10:40.701 ], 00:10:40.701 "product_name": "Raid Volume", 00:10:40.701 "block_size": 512, 00:10:40.701 "num_blocks": 190464, 00:10:40.701 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:40.701 "assigned_rate_limits": { 00:10:40.701 "rw_ios_per_sec": 0, 00:10:40.701 "rw_mbytes_per_sec": 0, 00:10:40.701 "r_mbytes_per_sec": 0, 00:10:40.701 "w_mbytes_per_sec": 0 00:10:40.701 }, 00:10:40.701 "claimed": false, 00:10:40.701 "zoned": false, 00:10:40.701 "supported_io_types": { 00:10:40.701 "read": true, 00:10:40.701 "write": true, 00:10:40.701 "unmap": true, 00:10:40.701 "flush": true, 00:10:40.701 "reset": true, 00:10:40.701 "nvme_admin": false, 00:10:40.701 "nvme_io": false, 00:10:40.701 "nvme_io_md": false, 00:10:40.701 
"write_zeroes": true, 00:10:40.701 "zcopy": false, 00:10:40.701 "get_zone_info": false, 00:10:40.701 "zone_management": false, 00:10:40.701 "zone_append": false, 00:10:40.701 "compare": false, 00:10:40.701 "compare_and_write": false, 00:10:40.701 "abort": false, 00:10:40.701 "seek_hole": false, 00:10:40.701 "seek_data": false, 00:10:40.701 "copy": false, 00:10:40.701 "nvme_iov_md": false 00:10:40.701 }, 00:10:40.701 "memory_domains": [ 00:10:40.701 { 00:10:40.701 "dma_device_id": "system", 00:10:40.701 "dma_device_type": 1 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.701 "dma_device_type": 2 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "dma_device_id": "system", 00:10:40.701 "dma_device_type": 1 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.701 "dma_device_type": 2 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "dma_device_id": "system", 00:10:40.701 "dma_device_type": 1 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.701 "dma_device_type": 2 00:10:40.701 } 00:10:40.701 ], 00:10:40.701 "driver_specific": { 00:10:40.701 "raid": { 00:10:40.701 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:40.701 "strip_size_kb": 64, 00:10:40.701 "state": "online", 00:10:40.701 "raid_level": "raid0", 00:10:40.701 "superblock": true, 00:10:40.701 "num_base_bdevs": 3, 00:10:40.701 "num_base_bdevs_discovered": 3, 00:10:40.701 "num_base_bdevs_operational": 3, 00:10:40.701 "base_bdevs_list": [ 00:10:40.701 { 00:10:40.701 "name": "BaseBdev1", 00:10:40.701 "uuid": "f17816fd-a31c-46ae-b704-0c42194e700a", 00:10:40.701 "is_configured": true, 00:10:40.701 "data_offset": 2048, 00:10:40.701 "data_size": 63488 00:10:40.701 }, 00:10:40.701 { 00:10:40.701 "name": "BaseBdev2", 00:10:40.701 "uuid": "4805f151-15b9-4206-849f-6ea9101f511f", 00:10:40.701 "is_configured": true, 00:10:40.701 "data_offset": 2048, 00:10:40.701 "data_size": 63488 00:10:40.701 }, 
00:10:40.701 { 00:10:40.701 "name": "BaseBdev3", 00:10:40.701 "uuid": "735df7e7-1f14-4e56-b607-e8e287b39b4c", 00:10:40.701 "is_configured": true, 00:10:40.701 "data_offset": 2048, 00:10:40.701 "data_size": 63488 00:10:40.701 } 00:10:40.701 ] 00:10:40.701 } 00:10:40.701 } 00:10:40.701 }' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.701 BaseBdev2 00:10:40.701 BaseBdev3' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.701 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.701 
20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.701 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.701 [2024-12-05 20:03:42.129831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.701 [2024-12-05 20:03:42.129933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.701 [2024-12-05 20:03:42.129997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.961 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.962 "name": "Existed_Raid", 00:10:40.962 "uuid": "32d95bc0-56e9-4a31-99ed-d362b2f3bbe4", 00:10:40.962 "strip_size_kb": 64, 00:10:40.962 "state": "offline", 00:10:40.962 "raid_level": "raid0", 00:10:40.962 "superblock": true, 00:10:40.962 "num_base_bdevs": 3, 00:10:40.962 "num_base_bdevs_discovered": 2, 00:10:40.962 "num_base_bdevs_operational": 2, 00:10:40.962 "base_bdevs_list": [ 00:10:40.962 { 00:10:40.962 "name": null, 00:10:40.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.962 "is_configured": false, 00:10:40.962 "data_offset": 0, 00:10:40.962 "data_size": 63488 00:10:40.962 }, 00:10:40.962 { 00:10:40.962 "name": "BaseBdev2", 00:10:40.962 "uuid": "4805f151-15b9-4206-849f-6ea9101f511f", 00:10:40.962 "is_configured": true, 00:10:40.962 "data_offset": 2048, 00:10:40.962 "data_size": 63488 00:10:40.962 }, 00:10:40.962 { 00:10:40.962 "name": "BaseBdev3", 00:10:40.962 "uuid": "735df7e7-1f14-4e56-b607-e8e287b39b4c", 
00:10:40.962 "is_configured": true, 00:10:40.962 "data_offset": 2048, 00:10:40.962 "data_size": 63488 00:10:40.962 } 00:10:40.962 ] 00:10:40.962 }' 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.962 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.532 [2024-12-05 20:03:42.724047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.532 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.532 [2024-12-05 20:03:42.874621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.533 [2024-12-05 20:03:42.874674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.793 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.793 BaseBdev2 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.793 20:03:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.793 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 [ 00:10:41.794 { 00:10:41.794 "name": "BaseBdev2", 00:10:41.794 "aliases": [ 00:10:41.794 "f1ee94aa-352c-4dff-a33c-2134fb04a0f9" 00:10:41.794 ], 00:10:41.794 "product_name": "Malloc disk", 00:10:41.794 "block_size": 512, 00:10:41.794 "num_blocks": 65536, 00:10:41.794 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:41.794 "assigned_rate_limits": { 00:10:41.794 "rw_ios_per_sec": 0, 00:10:41.794 "rw_mbytes_per_sec": 0, 00:10:41.794 "r_mbytes_per_sec": 0, 00:10:41.794 "w_mbytes_per_sec": 0 00:10:41.794 }, 00:10:41.794 "claimed": false, 00:10:41.794 "zoned": false, 00:10:41.794 "supported_io_types": { 00:10:41.794 "read": true, 00:10:41.794 "write": true, 00:10:41.794 "unmap": true, 00:10:41.794 "flush": true, 00:10:41.794 "reset": true, 00:10:41.794 "nvme_admin": false, 00:10:41.794 "nvme_io": false, 00:10:41.794 "nvme_io_md": false, 00:10:41.794 "write_zeroes": true, 00:10:41.794 "zcopy": true, 00:10:41.794 "get_zone_info": false, 00:10:41.794 
"zone_management": false, 00:10:41.794 "zone_append": false, 00:10:41.794 "compare": false, 00:10:41.794 "compare_and_write": false, 00:10:41.794 "abort": true, 00:10:41.794 "seek_hole": false, 00:10:41.794 "seek_data": false, 00:10:41.794 "copy": true, 00:10:41.794 "nvme_iov_md": false 00:10:41.794 }, 00:10:41.794 "memory_domains": [ 00:10:41.794 { 00:10:41.794 "dma_device_id": "system", 00:10:41.794 "dma_device_type": 1 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.794 "dma_device_type": 2 00:10:41.794 } 00:10:41.794 ], 00:10:41.794 "driver_specific": {} 00:10:41.794 } 00:10:41.794 ] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 BaseBdev3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 [ 00:10:41.794 { 00:10:41.794 "name": "BaseBdev3", 00:10:41.794 "aliases": [ 00:10:41.794 "d778db3a-b5b0-4f0e-80b5-80068e3a8528" 00:10:41.794 ], 00:10:41.794 "product_name": "Malloc disk", 00:10:41.794 "block_size": 512, 00:10:41.794 "num_blocks": 65536, 00:10:41.794 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:41.794 "assigned_rate_limits": { 00:10:41.794 "rw_ios_per_sec": 0, 00:10:41.794 "rw_mbytes_per_sec": 0, 00:10:41.794 "r_mbytes_per_sec": 0, 00:10:41.794 "w_mbytes_per_sec": 0 00:10:41.794 }, 00:10:41.794 "claimed": false, 00:10:41.794 "zoned": false, 00:10:41.794 "supported_io_types": { 00:10:41.794 "read": true, 00:10:41.794 "write": true, 00:10:41.794 "unmap": true, 00:10:41.794 "flush": true, 00:10:41.794 "reset": true, 00:10:41.794 "nvme_admin": false, 00:10:41.794 "nvme_io": false, 00:10:41.794 "nvme_io_md": false, 00:10:41.794 "write_zeroes": true, 00:10:41.794 
"zcopy": true, 00:10:41.794 "get_zone_info": false, 00:10:41.794 "zone_management": false, 00:10:41.794 "zone_append": false, 00:10:41.794 "compare": false, 00:10:41.794 "compare_and_write": false, 00:10:41.794 "abort": true, 00:10:41.794 "seek_hole": false, 00:10:41.794 "seek_data": false, 00:10:41.794 "copy": true, 00:10:41.794 "nvme_iov_md": false 00:10:41.794 }, 00:10:41.794 "memory_domains": [ 00:10:41.794 { 00:10:41.794 "dma_device_id": "system", 00:10:41.794 "dma_device_type": 1 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.794 "dma_device_type": 2 00:10:41.794 } 00:10:41.794 ], 00:10:41.794 "driver_specific": {} 00:10:41.794 } 00:10:41.794 ] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 [2024-12-05 20:03:43.194028] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.794 [2024-12-05 20:03:43.194112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.794 [2024-12-05 20:03:43.194153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.794 [2024-12-05 20:03:43.195959] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 20:03:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.055 "name": "Existed_Raid", 00:10:42.055 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:42.055 "strip_size_kb": 64, 00:10:42.055 "state": "configuring", 00:10:42.055 "raid_level": "raid0", 00:10:42.055 "superblock": true, 00:10:42.055 "num_base_bdevs": 3, 00:10:42.055 "num_base_bdevs_discovered": 2, 00:10:42.055 "num_base_bdevs_operational": 3, 00:10:42.055 "base_bdevs_list": [ 00:10:42.055 { 00:10:42.055 "name": "BaseBdev1", 00:10:42.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.055 "is_configured": false, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 0 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev2", 00:10:42.055 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 2048, 00:10:42.055 "data_size": 63488 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev3", 00:10:42.055 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 2048, 00:10:42.055 "data_size": 63488 00:10:42.055 } 00:10:42.055 ] 00:10:42.055 }' 00:10:42.055 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.055 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 [2024-12-05 20:03:43.661258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.315 20:03:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.315 "name": "Existed_Raid", 00:10:42.315 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:42.315 "strip_size_kb": 64, 
00:10:42.315 "state": "configuring", 00:10:42.315 "raid_level": "raid0", 00:10:42.315 "superblock": true, 00:10:42.315 "num_base_bdevs": 3, 00:10:42.315 "num_base_bdevs_discovered": 1, 00:10:42.315 "num_base_bdevs_operational": 3, 00:10:42.315 "base_bdevs_list": [ 00:10:42.315 { 00:10:42.315 "name": "BaseBdev1", 00:10:42.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.315 "is_configured": false, 00:10:42.315 "data_offset": 0, 00:10:42.315 "data_size": 0 00:10:42.315 }, 00:10:42.315 { 00:10:42.315 "name": null, 00:10:42.315 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:42.315 "is_configured": false, 00:10:42.315 "data_offset": 0, 00:10:42.315 "data_size": 63488 00:10:42.315 }, 00:10:42.315 { 00:10:42.315 "name": "BaseBdev3", 00:10:42.315 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:42.315 "is_configured": true, 00:10:42.315 "data_offset": 2048, 00:10:42.315 "data_size": 63488 00:10:42.315 } 00:10:42.315 ] 00:10:42.315 }' 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.315 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.884 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.884 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.884 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 [2024-12-05 20:03:44.141592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.885 BaseBdev1 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 
[ 00:10:42.885 { 00:10:42.885 "name": "BaseBdev1", 00:10:42.885 "aliases": [ 00:10:42.885 "dab836df-4d67-4424-a34b-53977609a5c0" 00:10:42.885 ], 00:10:42.885 "product_name": "Malloc disk", 00:10:42.885 "block_size": 512, 00:10:42.885 "num_blocks": 65536, 00:10:42.885 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:42.885 "assigned_rate_limits": { 00:10:42.885 "rw_ios_per_sec": 0, 00:10:42.885 "rw_mbytes_per_sec": 0, 00:10:42.885 "r_mbytes_per_sec": 0, 00:10:42.885 "w_mbytes_per_sec": 0 00:10:42.885 }, 00:10:42.885 "claimed": true, 00:10:42.885 "claim_type": "exclusive_write", 00:10:42.885 "zoned": false, 00:10:42.885 "supported_io_types": { 00:10:42.885 "read": true, 00:10:42.885 "write": true, 00:10:42.885 "unmap": true, 00:10:42.885 "flush": true, 00:10:42.885 "reset": true, 00:10:42.885 "nvme_admin": false, 00:10:42.885 "nvme_io": false, 00:10:42.885 "nvme_io_md": false, 00:10:42.885 "write_zeroes": true, 00:10:42.885 "zcopy": true, 00:10:42.885 "get_zone_info": false, 00:10:42.885 "zone_management": false, 00:10:42.885 "zone_append": false, 00:10:42.885 "compare": false, 00:10:42.885 "compare_and_write": false, 00:10:42.885 "abort": true, 00:10:42.885 "seek_hole": false, 00:10:42.885 "seek_data": false, 00:10:42.885 "copy": true, 00:10:42.885 "nvme_iov_md": false 00:10:42.885 }, 00:10:42.885 "memory_domains": [ 00:10:42.885 { 00:10:42.885 "dma_device_id": "system", 00:10:42.885 "dma_device_type": 1 00:10:42.885 }, 00:10:42.885 { 00:10:42.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.885 "dma_device_type": 2 00:10:42.885 } 00:10:42.885 ], 00:10:42.885 "driver_specific": {} 00:10:42.885 } 00:10:42.885 ] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.885 "name": "Existed_Raid", 00:10:42.885 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:42.885 "strip_size_kb": 64, 00:10:42.885 "state": "configuring", 00:10:42.885 "raid_level": "raid0", 00:10:42.885 "superblock": true, 
00:10:42.885 "num_base_bdevs": 3, 00:10:42.885 "num_base_bdevs_discovered": 2, 00:10:42.885 "num_base_bdevs_operational": 3, 00:10:42.885 "base_bdevs_list": [ 00:10:42.885 { 00:10:42.885 "name": "BaseBdev1", 00:10:42.885 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:42.885 "is_configured": true, 00:10:42.885 "data_offset": 2048, 00:10:42.885 "data_size": 63488 00:10:42.885 }, 00:10:42.885 { 00:10:42.885 "name": null, 00:10:42.885 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:42.885 "is_configured": false, 00:10:42.885 "data_offset": 0, 00:10:42.885 "data_size": 63488 00:10:42.885 }, 00:10:42.885 { 00:10:42.885 "name": "BaseBdev3", 00:10:42.885 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:42.885 "is_configured": true, 00:10:42.885 "data_offset": 2048, 00:10:42.885 "data_size": 63488 00:10:42.885 } 00:10:42.885 ] 00:10:42.885 }' 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.885 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:43.453 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.454 [2024-12-05 20:03:44.676725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.454 "name": "Existed_Raid", 00:10:43.454 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:43.454 "strip_size_kb": 64, 00:10:43.454 "state": "configuring", 00:10:43.454 "raid_level": "raid0", 00:10:43.454 "superblock": true, 00:10:43.454 "num_base_bdevs": 3, 00:10:43.454 "num_base_bdevs_discovered": 1, 00:10:43.454 "num_base_bdevs_operational": 3, 00:10:43.454 "base_bdevs_list": [ 00:10:43.454 { 00:10:43.454 "name": "BaseBdev1", 00:10:43.454 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:43.454 "is_configured": true, 00:10:43.454 "data_offset": 2048, 00:10:43.454 "data_size": 63488 00:10:43.454 }, 00:10:43.454 { 00:10:43.454 "name": null, 00:10:43.454 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:43.454 "is_configured": false, 00:10:43.454 "data_offset": 0, 00:10:43.454 "data_size": 63488 00:10:43.454 }, 00:10:43.454 { 00:10:43.454 "name": null, 00:10:43.454 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:43.454 "is_configured": false, 00:10:43.454 "data_offset": 0, 00:10:43.454 "data_size": 63488 00:10:43.454 } 00:10:43.454 ] 00:10:43.454 }' 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.454 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.714 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.714 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.714 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.714 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:43.714 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.973 [2024-12-05 20:03:45.160018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.973 "name": "Existed_Raid", 00:10:43.973 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:43.973 "strip_size_kb": 64, 00:10:43.973 "state": "configuring", 00:10:43.973 "raid_level": "raid0", 00:10:43.973 "superblock": true, 00:10:43.973 "num_base_bdevs": 3, 00:10:43.973 "num_base_bdevs_discovered": 2, 00:10:43.973 "num_base_bdevs_operational": 3, 00:10:43.973 "base_bdevs_list": [ 00:10:43.973 { 00:10:43.973 "name": "BaseBdev1", 00:10:43.973 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:43.973 "is_configured": true, 00:10:43.973 "data_offset": 2048, 00:10:43.973 "data_size": 63488 00:10:43.973 }, 00:10:43.973 { 00:10:43.973 "name": null, 00:10:43.973 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:43.973 "is_configured": false, 00:10:43.973 "data_offset": 0, 00:10:43.973 "data_size": 63488 00:10:43.973 }, 00:10:43.973 { 00:10:43.973 "name": "BaseBdev3", 00:10:43.973 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:43.973 "is_configured": true, 00:10:43.973 "data_offset": 2048, 00:10:43.973 "data_size": 63488 00:10:43.973 } 00:10:43.973 ] 00:10:43.973 }' 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.973 20:03:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.232 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.232 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.232 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.232 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.232 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.520 [2024-12-05 20:03:45.691074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.520 "name": "Existed_Raid", 00:10:44.520 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:44.520 "strip_size_kb": 64, 00:10:44.520 "state": "configuring", 00:10:44.520 "raid_level": "raid0", 00:10:44.520 "superblock": true, 00:10:44.520 "num_base_bdevs": 3, 00:10:44.520 "num_base_bdevs_discovered": 1, 00:10:44.520 "num_base_bdevs_operational": 3, 00:10:44.520 "base_bdevs_list": [ 00:10:44.520 { 00:10:44.520 "name": null, 00:10:44.520 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:44.520 "is_configured": false, 00:10:44.520 "data_offset": 0, 00:10:44.520 "data_size": 63488 00:10:44.520 }, 00:10:44.520 { 00:10:44.520 "name": null, 00:10:44.520 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:44.520 "is_configured": false, 00:10:44.520 "data_offset": 0, 00:10:44.520 
"data_size": 63488 00:10:44.520 }, 00:10:44.520 { 00:10:44.520 "name": "BaseBdev3", 00:10:44.520 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:44.520 "is_configured": true, 00:10:44.520 "data_offset": 2048, 00:10:44.520 "data_size": 63488 00:10:44.520 } 00:10:44.520 ] 00:10:44.520 }' 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.520 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.117 [2024-12-05 20:03:46.318851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:45.117 20:03:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.117 "name": "Existed_Raid", 00:10:45.117 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:45.117 "strip_size_kb": 64, 00:10:45.117 "state": "configuring", 00:10:45.117 "raid_level": "raid0", 00:10:45.117 "superblock": true, 00:10:45.117 "num_base_bdevs": 3, 00:10:45.117 
"num_base_bdevs_discovered": 2, 00:10:45.117 "num_base_bdevs_operational": 3, 00:10:45.117 "base_bdevs_list": [ 00:10:45.117 { 00:10:45.117 "name": null, 00:10:45.117 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:45.117 "is_configured": false, 00:10:45.117 "data_offset": 0, 00:10:45.117 "data_size": 63488 00:10:45.117 }, 00:10:45.117 { 00:10:45.117 "name": "BaseBdev2", 00:10:45.117 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:45.117 "is_configured": true, 00:10:45.117 "data_offset": 2048, 00:10:45.117 "data_size": 63488 00:10:45.117 }, 00:10:45.117 { 00:10:45.117 "name": "BaseBdev3", 00:10:45.117 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:45.117 "is_configured": true, 00:10:45.117 "data_offset": 2048, 00:10:45.117 "data_size": 63488 00:10:45.117 } 00:10:45.117 ] 00:10:45.117 }' 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.117 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.377 20:03:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.377 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dab836df-4d67-4424-a34b-53977609a5c0 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 [2024-12-05 20:03:46.874596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.746 [2024-12-05 20:03:46.874912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.746 [2024-12-05 20:03:46.874967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:45.746 [2024-12-05 20:03:46.875236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:45.746 [2024-12-05 20:03:46.875428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.746 [2024-12-05 20:03:46.875471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.746 NewBaseBdev 00:10:45.746 [2024-12-05 20:03:46.875664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 [ 00:10:45.746 { 00:10:45.746 "name": "NewBaseBdev", 00:10:45.746 "aliases": [ 00:10:45.746 "dab836df-4d67-4424-a34b-53977609a5c0" 00:10:45.746 ], 00:10:45.746 "product_name": "Malloc disk", 00:10:45.746 "block_size": 512, 00:10:45.746 "num_blocks": 65536, 00:10:45.746 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:45.746 "assigned_rate_limits": { 00:10:45.746 "rw_ios_per_sec": 0, 00:10:45.746 "rw_mbytes_per_sec": 0, 00:10:45.746 "r_mbytes_per_sec": 0, 00:10:45.746 "w_mbytes_per_sec": 0 00:10:45.746 }, 00:10:45.746 "claimed": true, 00:10:45.746 "claim_type": "exclusive_write", 00:10:45.746 "zoned": false, 00:10:45.746 "supported_io_types": { 00:10:45.746 "read": true, 00:10:45.746 "write": true, 
00:10:45.746 "unmap": true, 00:10:45.746 "flush": true, 00:10:45.746 "reset": true, 00:10:45.746 "nvme_admin": false, 00:10:45.746 "nvme_io": false, 00:10:45.746 "nvme_io_md": false, 00:10:45.746 "write_zeroes": true, 00:10:45.746 "zcopy": true, 00:10:45.746 "get_zone_info": false, 00:10:45.746 "zone_management": false, 00:10:45.746 "zone_append": false, 00:10:45.746 "compare": false, 00:10:45.746 "compare_and_write": false, 00:10:45.746 "abort": true, 00:10:45.746 "seek_hole": false, 00:10:45.746 "seek_data": false, 00:10:45.746 "copy": true, 00:10:45.746 "nvme_iov_md": false 00:10:45.746 }, 00:10:45.746 "memory_domains": [ 00:10:45.746 { 00:10:45.746 "dma_device_id": "system", 00:10:45.746 "dma_device_type": 1 00:10:45.746 }, 00:10:45.746 { 00:10:45.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.746 "dma_device_type": 2 00:10:45.746 } 00:10:45.746 ], 00:10:45.746 "driver_specific": {} 00:10:45.746 } 00:10:45.746 ] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.746 "name": "Existed_Raid", 00:10:45.746 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:45.746 "strip_size_kb": 64, 00:10:45.746 "state": "online", 00:10:45.746 "raid_level": "raid0", 00:10:45.746 "superblock": true, 00:10:45.746 "num_base_bdevs": 3, 00:10:45.746 "num_base_bdevs_discovered": 3, 00:10:45.746 "num_base_bdevs_operational": 3, 00:10:45.746 "base_bdevs_list": [ 00:10:45.746 { 00:10:45.746 "name": "NewBaseBdev", 00:10:45.746 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:45.746 "is_configured": true, 00:10:45.746 "data_offset": 2048, 00:10:45.746 "data_size": 63488 00:10:45.746 }, 00:10:45.746 { 00:10:45.746 "name": "BaseBdev2", 00:10:45.746 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:45.746 "is_configured": true, 00:10:45.746 "data_offset": 2048, 00:10:45.746 "data_size": 63488 00:10:45.746 }, 00:10:45.746 { 00:10:45.746 "name": "BaseBdev3", 00:10:45.746 "uuid": 
"d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:45.746 "is_configured": true, 00:10:45.746 "data_offset": 2048, 00:10:45.746 "data_size": 63488 00:10:45.746 } 00:10:45.746 ] 00:10:45.746 }' 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.746 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.007 [2024-12-05 20:03:47.374104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.007 "name": "Existed_Raid", 00:10:46.007 "aliases": [ 00:10:46.007 "e9e1326e-b645-4748-855b-958f4eaba22c" 
00:10:46.007 ], 00:10:46.007 "product_name": "Raid Volume", 00:10:46.007 "block_size": 512, 00:10:46.007 "num_blocks": 190464, 00:10:46.007 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:46.007 "assigned_rate_limits": { 00:10:46.007 "rw_ios_per_sec": 0, 00:10:46.007 "rw_mbytes_per_sec": 0, 00:10:46.007 "r_mbytes_per_sec": 0, 00:10:46.007 "w_mbytes_per_sec": 0 00:10:46.007 }, 00:10:46.007 "claimed": false, 00:10:46.007 "zoned": false, 00:10:46.007 "supported_io_types": { 00:10:46.007 "read": true, 00:10:46.007 "write": true, 00:10:46.007 "unmap": true, 00:10:46.007 "flush": true, 00:10:46.007 "reset": true, 00:10:46.007 "nvme_admin": false, 00:10:46.007 "nvme_io": false, 00:10:46.007 "nvme_io_md": false, 00:10:46.007 "write_zeroes": true, 00:10:46.007 "zcopy": false, 00:10:46.007 "get_zone_info": false, 00:10:46.007 "zone_management": false, 00:10:46.007 "zone_append": false, 00:10:46.007 "compare": false, 00:10:46.007 "compare_and_write": false, 00:10:46.007 "abort": false, 00:10:46.007 "seek_hole": false, 00:10:46.007 "seek_data": false, 00:10:46.007 "copy": false, 00:10:46.007 "nvme_iov_md": false 00:10:46.007 }, 00:10:46.007 "memory_domains": [ 00:10:46.007 { 00:10:46.007 "dma_device_id": "system", 00:10:46.007 "dma_device_type": 1 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.007 "dma_device_type": 2 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "system", 00:10:46.007 "dma_device_type": 1 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.007 "dma_device_type": 2 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "system", 00:10:46.007 "dma_device_type": 1 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.007 "dma_device_type": 2 00:10:46.007 } 00:10:46.007 ], 00:10:46.007 "driver_specific": { 00:10:46.007 "raid": { 00:10:46.007 "uuid": "e9e1326e-b645-4748-855b-958f4eaba22c", 00:10:46.007 
"strip_size_kb": 64, 00:10:46.007 "state": "online", 00:10:46.007 "raid_level": "raid0", 00:10:46.007 "superblock": true, 00:10:46.007 "num_base_bdevs": 3, 00:10:46.007 "num_base_bdevs_discovered": 3, 00:10:46.007 "num_base_bdevs_operational": 3, 00:10:46.007 "base_bdevs_list": [ 00:10:46.007 { 00:10:46.007 "name": "NewBaseBdev", 00:10:46.007 "uuid": "dab836df-4d67-4424-a34b-53977609a5c0", 00:10:46.007 "is_configured": true, 00:10:46.007 "data_offset": 2048, 00:10:46.007 "data_size": 63488 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "name": "BaseBdev2", 00:10:46.007 "uuid": "f1ee94aa-352c-4dff-a33c-2134fb04a0f9", 00:10:46.007 "is_configured": true, 00:10:46.007 "data_offset": 2048, 00:10:46.007 "data_size": 63488 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "name": "BaseBdev3", 00:10:46.007 "uuid": "d778db3a-b5b0-4f0e-80b5-80068e3a8528", 00:10:46.007 "is_configured": true, 00:10:46.007 "data_offset": 2048, 00:10:46.007 "data_size": 63488 00:10:46.007 } 00:10:46.007 ] 00:10:46.007 } 00:10:46.007 } 00:10:46.007 }' 00:10:46.007 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.267 BaseBdev2 00:10:46.267 BaseBdev3' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.267 20:03:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.267 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.268 [2024-12-05 20:03:47.661311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.268 [2024-12-05 20:03:47.661340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.268 [2024-12-05 20:03:47.661414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.268 [2024-12-05 20:03:47.661470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.268 [2024-12-05 20:03:47.661482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64557 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64557 ']' 00:10:46.268 20:03:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64557 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.268 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64557 00:10:46.527 killing process with pid 64557 00:10:46.527 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.527 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.527 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64557' 00:10:46.527 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64557 00:10:46.527 [2024-12-05 20:03:47.708098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.527 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64557 00:10:46.785 [2024-12-05 20:03:48.022296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.163 20:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.163 00:10:48.163 real 0m10.810s 00:10:48.163 user 0m17.256s 00:10:48.163 sys 0m1.868s 00:10:48.163 20:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.163 20:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.164 ************************************ 00:10:48.164 END TEST raid_state_function_test_sb 00:10:48.164 ************************************ 00:10:48.164 20:03:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:48.164 20:03:49 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.164 20:03:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.164 20:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.164 ************************************ 00:10:48.164 START TEST raid_superblock_test 00:10:48.164 ************************************ 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:48.164 20:03:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65183 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65183 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65183 ']' 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.164 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.164 [2024-12-05 20:03:49.335455] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:48.164 [2024-12-05 20:03:49.335601] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65183 ] 00:10:48.164 [2024-12-05 20:03:49.512050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.430 [2024-12-05 20:03:49.622173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.430 [2024-12-05 20:03:49.819392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.430 [2024-12-05 20:03:49.819421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:48.999 
20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 malloc1 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 [2024-12-05 20:03:50.217384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.999 [2024-12-05 20:03:50.217492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.999 [2024-12-05 20:03:50.217531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:48.999 [2024-12-05 20:03:50.217559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.999 [2024-12-05 20:03:50.219646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.999 [2024-12-05 20:03:50.219716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.999 pt1 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 malloc2 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 [2024-12-05 20:03:50.277599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.999 [2024-12-05 20:03:50.277716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.999 [2024-12-05 20:03:50.277765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:48.999 [2024-12-05 20:03:50.277802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.999 [2024-12-05 20:03:50.280110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.999 [2024-12-05 20:03:50.280186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.999 
pt2 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 malloc3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 [2024-12-05 20:03:50.342180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.999 [2024-12-05 20:03:50.342306] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.999 [2024-12-05 20:03:50.342348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:48.999 [2024-12-05 20:03:50.342377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.999 [2024-12-05 20:03:50.344668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.999 [2024-12-05 20:03:50.344747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.999 pt3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 [2024-12-05 20:03:50.358227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.999 [2024-12-05 20:03:50.360104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.999 [2024-12-05 20:03:50.360242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.999 [2024-12-05 20:03:50.360444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:48.999 [2024-12-05 20:03:50.360495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:48.999 [2024-12-05 20:03:50.360809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:48.999 [2024-12-05 20:03:50.361047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:48.999 [2024-12-05 20:03:50.361091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:48.999 [2024-12-05 20:03:50.361326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.999 20:03:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.999 "name": "raid_bdev1", 00:10:48.999 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:48.999 "strip_size_kb": 64, 00:10:48.999 "state": "online", 00:10:48.999 "raid_level": "raid0", 00:10:48.999 "superblock": true, 00:10:48.999 "num_base_bdevs": 3, 00:10:48.999 "num_base_bdevs_discovered": 3, 00:10:48.999 "num_base_bdevs_operational": 3, 00:10:48.999 "base_bdevs_list": [ 00:10:48.999 { 00:10:48.999 "name": "pt1", 00:10:48.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.999 "is_configured": true, 00:10:48.999 "data_offset": 2048, 00:10:48.999 "data_size": 63488 00:10:48.999 }, 00:10:48.999 { 00:10:48.999 "name": "pt2", 00:10:48.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.999 "is_configured": true, 00:10:48.999 "data_offset": 2048, 00:10:48.999 "data_size": 63488 00:10:48.999 }, 00:10:48.999 { 00:10:48.999 "name": "pt3", 00:10:48.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.999 "is_configured": true, 00:10:48.999 "data_offset": 2048, 00:10:48.999 "data_size": 63488 00:10:48.999 } 00:10:48.999 ] 00:10:48.999 }' 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.999 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.567 [2024-12-05 20:03:50.813728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.567 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.567 "name": "raid_bdev1", 00:10:49.567 "aliases": [ 00:10:49.567 "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8" 00:10:49.567 ], 00:10:49.567 "product_name": "Raid Volume", 00:10:49.567 "block_size": 512, 00:10:49.567 "num_blocks": 190464, 00:10:49.567 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:49.567 "assigned_rate_limits": { 00:10:49.567 "rw_ios_per_sec": 0, 00:10:49.567 "rw_mbytes_per_sec": 0, 00:10:49.567 "r_mbytes_per_sec": 0, 00:10:49.567 "w_mbytes_per_sec": 0 00:10:49.567 }, 00:10:49.567 "claimed": false, 00:10:49.567 "zoned": false, 00:10:49.567 "supported_io_types": { 00:10:49.567 "read": true, 00:10:49.567 "write": true, 00:10:49.567 "unmap": true, 00:10:49.567 "flush": true, 00:10:49.567 "reset": true, 00:10:49.567 "nvme_admin": false, 00:10:49.567 "nvme_io": false, 00:10:49.567 "nvme_io_md": false, 00:10:49.567 "write_zeroes": true, 00:10:49.567 "zcopy": false, 00:10:49.567 "get_zone_info": false, 00:10:49.567 "zone_management": false, 00:10:49.567 "zone_append": false, 00:10:49.567 "compare": 
false, 00:10:49.567 "compare_and_write": false, 00:10:49.567 "abort": false, 00:10:49.567 "seek_hole": false, 00:10:49.567 "seek_data": false, 00:10:49.567 "copy": false, 00:10:49.567 "nvme_iov_md": false 00:10:49.567 }, 00:10:49.567 "memory_domains": [ 00:10:49.567 { 00:10:49.567 "dma_device_id": "system", 00:10:49.567 "dma_device_type": 1 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.567 "dma_device_type": 2 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "dma_device_id": "system", 00:10:49.567 "dma_device_type": 1 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.567 "dma_device_type": 2 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "dma_device_id": "system", 00:10:49.567 "dma_device_type": 1 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.567 "dma_device_type": 2 00:10:49.567 } 00:10:49.567 ], 00:10:49.567 "driver_specific": { 00:10:49.567 "raid": { 00:10:49.567 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:49.567 "strip_size_kb": 64, 00:10:49.567 "state": "online", 00:10:49.567 "raid_level": "raid0", 00:10:49.567 "superblock": true, 00:10:49.567 "num_base_bdevs": 3, 00:10:49.567 "num_base_bdevs_discovered": 3, 00:10:49.567 "num_base_bdevs_operational": 3, 00:10:49.567 "base_bdevs_list": [ 00:10:49.567 { 00:10:49.567 "name": "pt1", 00:10:49.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.567 "is_configured": true, 00:10:49.567 "data_offset": 2048, 00:10:49.567 "data_size": 63488 00:10:49.567 }, 00:10:49.567 { 00:10:49.567 "name": "pt2", 00:10:49.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.567 "is_configured": true, 00:10:49.567 "data_offset": 2048, 00:10:49.567 "data_size": 63488 00:10:49.567 }, 00:10:49.567 { 00:10:49.568 "name": "pt3", 00:10:49.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.568 "is_configured": true, 00:10:49.568 "data_offset": 2048, 00:10:49.568 "data_size": 
63488 00:10:49.568 } 00:10:49.568 ] 00:10:49.568 } 00:10:49.568 } 00:10:49.568 }' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.568 pt2 00:10:49.568 pt3' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.568 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.827 [2024-12-05 20:03:51.089216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f33fea34-67ea-48ce-8fda-b39b4b2fa2e8 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f33fea34-67ea-48ce-8fda-b39b4b2fa2e8 ']' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.827 [2024-12-05 20:03:51.136800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.827 [2024-12-05 20:03:51.136829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.827 [2024-12-05 20:03:51.136929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.827 [2024-12-05 20:03:51.136994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.827 [2024-12-05 20:03:51.137003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.827 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:49.828 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.088 [2024-12-05 20:03:51.288647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:50.088 [2024-12-05 20:03:51.290570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:50.088 [2024-12-05 20:03:51.290631] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:50.088 [2024-12-05 20:03:51.290689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:50.088 [2024-12-05 20:03:51.290745] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:50.088 [2024-12-05 20:03:51.290764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:50.088 [2024-12-05 20:03:51.290781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.088 [2024-12-05 20:03:51.290793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:50.088 request: 00:10:50.088 { 00:10:50.088 "name": "raid_bdev1", 00:10:50.088 "raid_level": "raid0", 00:10:50.088 "base_bdevs": [ 00:10:50.088 "malloc1", 00:10:50.088 "malloc2", 00:10:50.088 "malloc3" 00:10:50.088 ], 00:10:50.088 "strip_size_kb": 64, 00:10:50.088 "superblock": false, 00:10:50.088 "method": "bdev_raid_create", 00:10:50.088 "req_id": 1 00:10:50.088 } 00:10:50.088 Got JSON-RPC error response 00:10:50.088 response: 00:10:50.088 { 00:10:50.088 "code": -17, 00:10:50.088 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:50.088 } 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.088 [2024-12-05 20:03:51.352469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.088 [2024-12-05 20:03:51.352599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.088 [2024-12-05 20:03:51.352644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:50.088 [2024-12-05 20:03:51.352679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.088 [2024-12-05 20:03:51.355122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.088 [2024-12-05 20:03:51.355198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.088 [2024-12-05 20:03:51.355324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.088 [2024-12-05 20:03:51.355414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:50.088 pt1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.088 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.088 "name": "raid_bdev1", 00:10:50.088 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:50.088 
"strip_size_kb": 64, 00:10:50.088 "state": "configuring", 00:10:50.088 "raid_level": "raid0", 00:10:50.088 "superblock": true, 00:10:50.088 "num_base_bdevs": 3, 00:10:50.088 "num_base_bdevs_discovered": 1, 00:10:50.088 "num_base_bdevs_operational": 3, 00:10:50.088 "base_bdevs_list": [ 00:10:50.088 { 00:10:50.088 "name": "pt1", 00:10:50.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.088 "is_configured": true, 00:10:50.088 "data_offset": 2048, 00:10:50.088 "data_size": 63488 00:10:50.089 }, 00:10:50.089 { 00:10:50.089 "name": null, 00:10:50.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.089 "is_configured": false, 00:10:50.089 "data_offset": 2048, 00:10:50.089 "data_size": 63488 00:10:50.089 }, 00:10:50.089 { 00:10:50.089 "name": null, 00:10:50.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.089 "is_configured": false, 00:10:50.089 "data_offset": 2048, 00:10:50.089 "data_size": 63488 00:10:50.089 } 00:10:50.089 ] 00:10:50.089 }' 00:10:50.089 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.089 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.659 [2024-12-05 20:03:51.835662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.659 [2024-12-05 20:03:51.835749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.659 [2024-12-05 20:03:51.835778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:50.659 [2024-12-05 20:03:51.835788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.659 [2024-12-05 20:03:51.836286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.659 [2024-12-05 20:03:51.836314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.659 [2024-12-05 20:03:51.836411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.659 [2024-12-05 20:03:51.836445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.659 pt2 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.659 [2024-12-05 20:03:51.843646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.659 20:03:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.659 "name": "raid_bdev1", 00:10:50.659 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:50.659 "strip_size_kb": 64, 00:10:50.659 "state": "configuring", 00:10:50.659 "raid_level": "raid0", 00:10:50.659 "superblock": true, 00:10:50.659 "num_base_bdevs": 3, 00:10:50.659 "num_base_bdevs_discovered": 1, 00:10:50.659 "num_base_bdevs_operational": 3, 00:10:50.659 "base_bdevs_list": [ 00:10:50.659 { 00:10:50.659 "name": "pt1", 00:10:50.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.659 "is_configured": true, 00:10:50.659 "data_offset": 2048, 00:10:50.659 "data_size": 63488 00:10:50.659 }, 00:10:50.659 { 00:10:50.659 "name": null, 00:10:50.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.659 "is_configured": false, 00:10:50.659 "data_offset": 0, 00:10:50.659 "data_size": 63488 00:10:50.659 }, 00:10:50.659 { 00:10:50.659 "name": null, 00:10:50.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.659 
"is_configured": false, 00:10:50.659 "data_offset": 2048, 00:10:50.659 "data_size": 63488 00:10:50.659 } 00:10:50.659 ] 00:10:50.659 }' 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.659 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.919 [2024-12-05 20:03:52.278886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.919 [2024-12-05 20:03:52.279035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.919 [2024-12-05 20:03:52.279059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:50.919 [2024-12-05 20:03:52.279071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.919 [2024-12-05 20:03:52.279539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.919 [2024-12-05 20:03:52.279570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.919 [2024-12-05 20:03:52.279658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.919 [2024-12-05 20:03:52.279685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.919 pt2 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.919 [2024-12-05 20:03:52.290835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.919 [2024-12-05 20:03:52.290896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.919 [2024-12-05 20:03:52.290911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:50.919 [2024-12-05 20:03:52.290937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.919 [2024-12-05 20:03:52.291323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.919 [2024-12-05 20:03:52.291360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.919 [2024-12-05 20:03:52.291423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:50.919 [2024-12-05 20:03:52.291444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.919 [2024-12-05 20:03:52.291572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.919 [2024-12-05 20:03:52.291588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:50.919 [2024-12-05 20:03:52.291832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:50.919 [2024-12-05 20:03:52.291996] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.919 [2024-12-05 20:03:52.292006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:50.919 [2024-12-05 20:03:52.292169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.919 pt3 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.919 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.919 "name": "raid_bdev1", 00:10:50.919 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:50.919 "strip_size_kb": 64, 00:10:50.919 "state": "online", 00:10:50.919 "raid_level": "raid0", 00:10:50.919 "superblock": true, 00:10:50.919 "num_base_bdevs": 3, 00:10:50.919 "num_base_bdevs_discovered": 3, 00:10:50.920 "num_base_bdevs_operational": 3, 00:10:50.920 "base_bdevs_list": [ 00:10:50.920 { 00:10:50.920 "name": "pt1", 00:10:50.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.920 "is_configured": true, 00:10:50.920 "data_offset": 2048, 00:10:50.920 "data_size": 63488 00:10:50.920 }, 00:10:50.920 { 00:10:50.920 "name": "pt2", 00:10:50.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.920 "is_configured": true, 00:10:50.920 "data_offset": 2048, 00:10:50.920 "data_size": 63488 00:10:50.920 }, 00:10:50.920 { 00:10:50.920 "name": "pt3", 00:10:50.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.920 "is_configured": true, 00:10:50.920 "data_offset": 2048, 00:10:50.920 "data_size": 63488 00:10:50.920 } 00:10:50.920 ] 00:10:50.920 }' 00:10:50.920 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.920 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.490 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.490 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.490 20:03:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.490 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.490 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.490 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.491 [2024-12-05 20:03:52.782365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.491 "name": "raid_bdev1", 00:10:51.491 "aliases": [ 00:10:51.491 "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8" 00:10:51.491 ], 00:10:51.491 "product_name": "Raid Volume", 00:10:51.491 "block_size": 512, 00:10:51.491 "num_blocks": 190464, 00:10:51.491 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:51.491 "assigned_rate_limits": { 00:10:51.491 "rw_ios_per_sec": 0, 00:10:51.491 "rw_mbytes_per_sec": 0, 00:10:51.491 "r_mbytes_per_sec": 0, 00:10:51.491 "w_mbytes_per_sec": 0 00:10:51.491 }, 00:10:51.491 "claimed": false, 00:10:51.491 "zoned": false, 00:10:51.491 "supported_io_types": { 00:10:51.491 "read": true, 00:10:51.491 "write": true, 00:10:51.491 "unmap": true, 00:10:51.491 "flush": true, 00:10:51.491 "reset": true, 00:10:51.491 "nvme_admin": false, 00:10:51.491 "nvme_io": false, 00:10:51.491 "nvme_io_md": false, 00:10:51.491 
"write_zeroes": true, 00:10:51.491 "zcopy": false, 00:10:51.491 "get_zone_info": false, 00:10:51.491 "zone_management": false, 00:10:51.491 "zone_append": false, 00:10:51.491 "compare": false, 00:10:51.491 "compare_and_write": false, 00:10:51.491 "abort": false, 00:10:51.491 "seek_hole": false, 00:10:51.491 "seek_data": false, 00:10:51.491 "copy": false, 00:10:51.491 "nvme_iov_md": false 00:10:51.491 }, 00:10:51.491 "memory_domains": [ 00:10:51.491 { 00:10:51.491 "dma_device_id": "system", 00:10:51.491 "dma_device_type": 1 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.491 "dma_device_type": 2 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "dma_device_id": "system", 00:10:51.491 "dma_device_type": 1 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.491 "dma_device_type": 2 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "dma_device_id": "system", 00:10:51.491 "dma_device_type": 1 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.491 "dma_device_type": 2 00:10:51.491 } 00:10:51.491 ], 00:10:51.491 "driver_specific": { 00:10:51.491 "raid": { 00:10:51.491 "uuid": "f33fea34-67ea-48ce-8fda-b39b4b2fa2e8", 00:10:51.491 "strip_size_kb": 64, 00:10:51.491 "state": "online", 00:10:51.491 "raid_level": "raid0", 00:10:51.491 "superblock": true, 00:10:51.491 "num_base_bdevs": 3, 00:10:51.491 "num_base_bdevs_discovered": 3, 00:10:51.491 "num_base_bdevs_operational": 3, 00:10:51.491 "base_bdevs_list": [ 00:10:51.491 { 00:10:51.491 "name": "pt1", 00:10:51.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.491 "is_configured": true, 00:10:51.491 "data_offset": 2048, 00:10:51.491 "data_size": 63488 00:10:51.491 }, 00:10:51.491 { 00:10:51.491 "name": "pt2", 00:10:51.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.491 "is_configured": true, 00:10:51.491 "data_offset": 2048, 00:10:51.491 "data_size": 63488 00:10:51.491 }, 00:10:51.491 
{ 00:10:51.491 "name": "pt3", 00:10:51.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.491 "is_configured": true, 00:10:51.491 "data_offset": 2048, 00:10:51.491 "data_size": 63488 00:10:51.491 } 00:10:51.491 ] 00:10:51.491 } 00:10:51.491 } 00:10:51.491 }' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.491 pt2 00:10:51.491 pt3' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.491 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.752 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.752 [2024-12-05 
20:03:53.045852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f33fea34-67ea-48ce-8fda-b39b4b2fa2e8 '!=' f33fea34-67ea-48ce-8fda-b39b4b2fa2e8 ']' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65183 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65183 ']' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65183 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65183 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.752 killing process with pid 65183 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65183' 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65183 00:10:51.752 [2024-12-05 20:03:53.131594] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.752 [2024-12-05 20:03:53.131704] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.752 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65183 00:10:51.752 [2024-12-05 20:03:53.131775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.752 [2024-12-05 20:03:53.131788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:52.012 [2024-12-05 20:03:53.432923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.396 20:03:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.396 00:10:53.396 real 0m5.313s 00:10:53.396 user 0m7.637s 00:10:53.396 sys 0m0.949s 00:10:53.396 20:03:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.396 20:03:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.396 ************************************ 00:10:53.396 END TEST raid_superblock_test 00:10:53.396 ************************************ 00:10:53.396 20:03:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:53.396 20:03:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.396 20:03:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.396 20:03:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.396 ************************************ 00:10:53.396 START TEST raid_read_error_test 00:10:53.396 ************************************ 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:53.396 20:03:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CQ2i6reSkz 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65436 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65436 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65436 ']' 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.396 20:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.396 [2024-12-05 20:03:54.738519] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:53.396 [2024-12-05 20:03:54.738722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65436 ] 00:10:53.656 [2024-12-05 20:03:54.914012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.656 [2024-12-05 20:03:55.030884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.916 [2024-12-05 20:03:55.228754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.916 [2024-12-05 20:03:55.228826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.183 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 BaseBdev1_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 true 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 [2024-12-05 20:03:55.637414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.455 [2024-12-05 20:03:55.637539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.455 [2024-12-05 20:03:55.637569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.455 [2024-12-05 20:03:55.637583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.455 [2024-12-05 20:03:55.640151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.455 [2024-12-05 20:03:55.640199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.455 BaseBdev1 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 BaseBdev2_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 true 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 [2024-12-05 20:03:55.704626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.455 [2024-12-05 20:03:55.704688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.455 [2024-12-05 20:03:55.704707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.455 [2024-12-05 20:03:55.704719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.455 [2024-12-05 20:03:55.706958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.455 [2024-12-05 20:03:55.707063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.455 BaseBdev2 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 BaseBdev3_malloc 00:10:54.455 20:03:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 true 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 [2024-12-05 20:03:55.781659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.455 [2024-12-05 20:03:55.781716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.455 [2024-12-05 20:03:55.781734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:54.455 [2024-12-05 20:03:55.781744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.455 [2024-12-05 20:03:55.783949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.455 [2024-12-05 20:03:55.783989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.455 BaseBdev3 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 [2024-12-05 20:03:55.793714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.455 [2024-12-05 20:03:55.795563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.455 [2024-12-05 20:03:55.795710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.455 [2024-12-05 20:03:55.795963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.455 [2024-12-05 20:03:55.795981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.455 [2024-12-05 20:03:55.796315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:54.455 [2024-12-05 20:03:55.796518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.455 [2024-12-05 20:03:55.796536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:54.455 [2024-12-05 20:03:55.796724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.455 20:03:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.455 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.455 "name": "raid_bdev1", 00:10:54.455 "uuid": "215423c1-a7ea-4c66-a8d3-93e94ca1f97b", 00:10:54.455 "strip_size_kb": 64, 00:10:54.455 "state": "online", 00:10:54.455 "raid_level": "raid0", 00:10:54.455 "superblock": true, 00:10:54.455 "num_base_bdevs": 3, 00:10:54.455 "num_base_bdevs_discovered": 3, 00:10:54.455 "num_base_bdevs_operational": 3, 00:10:54.455 "base_bdevs_list": [ 00:10:54.455 { 00:10:54.455 "name": "BaseBdev1", 00:10:54.455 "uuid": "4cf0b4cc-ad92-5dad-8517-63283a37351b", 00:10:54.455 "is_configured": true, 00:10:54.455 "data_offset": 2048, 00:10:54.455 "data_size": 63488 00:10:54.455 }, 00:10:54.455 { 00:10:54.455 "name": "BaseBdev2", 00:10:54.456 "uuid": "1c429515-20d6-58e2-ae84-510fec9714fa", 00:10:54.456 "is_configured": true, 00:10:54.456 "data_offset": 2048, 00:10:54.456 "data_size": 63488 
00:10:54.456 }, 00:10:54.456 { 00:10:54.456 "name": "BaseBdev3", 00:10:54.456 "uuid": "97db45fe-dd48-5d52-8f07-9120eda12152", 00:10:54.456 "is_configured": true, 00:10:54.456 "data_offset": 2048, 00:10:54.456 "data_size": 63488 00:10:54.456 } 00:10:54.456 ] 00:10:54.456 }' 00:10:54.456 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.456 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.026 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.026 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.026 [2024-12-05 20:03:56.362059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.961 "name": "raid_bdev1", 00:10:55.961 "uuid": "215423c1-a7ea-4c66-a8d3-93e94ca1f97b", 00:10:55.961 "strip_size_kb": 64, 00:10:55.961 "state": "online", 00:10:55.961 "raid_level": "raid0", 00:10:55.961 "superblock": true, 00:10:55.961 "num_base_bdevs": 3, 00:10:55.961 "num_base_bdevs_discovered": 3, 00:10:55.961 "num_base_bdevs_operational": 3, 00:10:55.961 "base_bdevs_list": [ 00:10:55.961 { 00:10:55.961 "name": "BaseBdev1", 00:10:55.961 "uuid": "4cf0b4cc-ad92-5dad-8517-63283a37351b", 00:10:55.961 "is_configured": true, 00:10:55.961 "data_offset": 2048, 00:10:55.961 "data_size": 63488 
00:10:55.961 }, 00:10:55.961 { 00:10:55.961 "name": "BaseBdev2", 00:10:55.961 "uuid": "1c429515-20d6-58e2-ae84-510fec9714fa", 00:10:55.961 "is_configured": true, 00:10:55.961 "data_offset": 2048, 00:10:55.961 "data_size": 63488 00:10:55.961 }, 00:10:55.961 { 00:10:55.961 "name": "BaseBdev3", 00:10:55.961 "uuid": "97db45fe-dd48-5d52-8f07-9120eda12152", 00:10:55.961 "is_configured": true, 00:10:55.961 "data_offset": 2048, 00:10:55.961 "data_size": 63488 00:10:55.961 } 00:10:55.961 ] 00:10:55.961 }' 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.961 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.552 [2024-12-05 20:03:57.714457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.552 [2024-12-05 20:03:57.714491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.552 [2024-12-05 20:03:57.717577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.552 [2024-12-05 20:03:57.717692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.552 [2024-12-05 20:03:57.717744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.552 [2024-12-05 20:03:57.717756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:56.552 { 00:10:56.552 "results": [ 00:10:56.552 { 00:10:56.552 "job": "raid_bdev1", 00:10:56.552 "core_mask": "0x1", 00:10:56.552 "workload": "randrw", 00:10:56.552 "percentage": 50, 
00:10:56.552 "status": "finished", 00:10:56.552 "queue_depth": 1, 00:10:56.552 "io_size": 131072, 00:10:56.552 "runtime": 1.353287, 00:10:56.552 "iops": 14890.411272701209, 00:10:56.552 "mibps": 1861.301409087651, 00:10:56.552 "io_failed": 1, 00:10:56.552 "io_timeout": 0, 00:10:56.552 "avg_latency_us": 93.08652390305296, 00:10:56.552 "min_latency_us": 27.053275109170304, 00:10:56.552 "max_latency_us": 1445.2262008733624 00:10:56.552 } 00:10:56.552 ], 00:10:56.552 "core_count": 1 00:10:56.552 } 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65436 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65436 ']' 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65436 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65436 00:10:56.552 killing process with pid 65436 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65436' 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65436 00:10:56.552 [2024-12-05 20:03:57.751722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.552 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65436 00:10:56.552 [2024-12-05 
20:03:57.985514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CQ2i6reSkz 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:57.933 00:10:57.933 real 0m4.585s 00:10:57.933 user 0m5.459s 00:10:57.933 sys 0m0.515s 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.933 20:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.933 ************************************ 00:10:57.933 END TEST raid_read_error_test 00:10:57.933 ************************************ 00:10:57.933 20:03:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:57.933 20:03:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.933 20:03:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.933 20:03:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.933 ************************************ 00:10:57.933 START TEST raid_write_error_test 00:10:57.933 ************************************ 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:57.933 20:03:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:57.933 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.934 20:03:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gcDvhhE51G 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65582 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65582 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65582 ']' 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.934 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.193 [2024-12-05 20:03:59.390378] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:58.193 [2024-12-05 20:03:59.390624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65582 ] 00:10:58.193 [2024-12-05 20:03:59.566719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.451 [2024-12-05 20:03:59.686764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.710 [2024-12-05 20:03:59.896280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.710 [2024-12-05 20:03:59.896344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.969 BaseBdev1_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.969 true 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.969 [2024-12-05 20:04:00.310094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.969 [2024-12-05 20:04:00.310223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.969 [2024-12-05 20:04:00.310261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.969 [2024-12-05 20:04:00.310272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.969 [2024-12-05 20:04:00.312413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.969 [2024-12-05 20:04:00.312455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.969 BaseBdev1 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.969 BaseBdev2_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.969 true 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.969 [2024-12-05 20:04:00.374680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.969 [2024-12-05 20:04:00.374749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.969 [2024-12-05 20:04:00.374771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.969 [2024-12-05 20:04:00.374782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.969 [2024-12-05 20:04:00.377169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.969 [2024-12-05 20:04:00.377214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.969 BaseBdev2 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.969 20:04:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.969 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 BaseBdev3_malloc 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 true 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 [2024-12-05 20:04:00.455739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.229 [2024-12-05 20:04:00.455862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.229 [2024-12-05 20:04:00.455919] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.229 [2024-12-05 20:04:00.455933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.229 [2024-12-05 20:04:00.458223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.229 [2024-12-05 20:04:00.458264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:59.229 BaseBdev3 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 [2024-12-05 20:04:00.467782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.229 [2024-12-05 20:04:00.469686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.229 [2024-12-05 20:04:00.469760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.229 [2024-12-05 20:04:00.469972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.229 [2024-12-05 20:04:00.469987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.229 [2024-12-05 20:04:00.470236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:59.229 [2024-12-05 20:04:00.470457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.229 [2024-12-05 20:04:00.470475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.229 [2024-12-05 20:04:00.470622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.229 "name": "raid_bdev1", 00:10:59.229 "uuid": "247484ff-b92a-4e3f-8a1f-919342efdd50", 00:10:59.229 "strip_size_kb": 64, 00:10:59.229 "state": "online", 00:10:59.229 "raid_level": "raid0", 00:10:59.229 "superblock": true, 00:10:59.229 "num_base_bdevs": 3, 00:10:59.229 "num_base_bdevs_discovered": 3, 00:10:59.229 "num_base_bdevs_operational": 3, 00:10:59.229 "base_bdevs_list": [ 00:10:59.229 { 00:10:59.229 "name": "BaseBdev1", 
00:10:59.229 "uuid": "243ff758-b6bb-5ce1-a200-ccdeed3bbbce", 00:10:59.229 "is_configured": true, 00:10:59.229 "data_offset": 2048, 00:10:59.229 "data_size": 63488 00:10:59.229 }, 00:10:59.229 { 00:10:59.229 "name": "BaseBdev2", 00:10:59.229 "uuid": "a5e7b793-e2ba-5aad-a89a-6f0ad1e5d8a2", 00:10:59.229 "is_configured": true, 00:10:59.229 "data_offset": 2048, 00:10:59.229 "data_size": 63488 00:10:59.229 }, 00:10:59.229 { 00:10:59.229 "name": "BaseBdev3", 00:10:59.229 "uuid": "fbce5d79-9b58-5404-a5d3-a5e9bbf03dea", 00:10:59.229 "is_configured": true, 00:10:59.229 "data_offset": 2048, 00:10:59.229 "data_size": 63488 00:10:59.229 } 00:10:59.229 ] 00:10:59.229 }' 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.229 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.491 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.491 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.768 [2024-12-05 20:04:01.012079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.704 "name": "raid_bdev1", 00:11:00.704 "uuid": "247484ff-b92a-4e3f-8a1f-919342efdd50", 00:11:00.704 "strip_size_kb": 64, 00:11:00.704 "state": "online", 00:11:00.704 
"raid_level": "raid0", 00:11:00.704 "superblock": true, 00:11:00.704 "num_base_bdevs": 3, 00:11:00.704 "num_base_bdevs_discovered": 3, 00:11:00.704 "num_base_bdevs_operational": 3, 00:11:00.704 "base_bdevs_list": [ 00:11:00.704 { 00:11:00.704 "name": "BaseBdev1", 00:11:00.704 "uuid": "243ff758-b6bb-5ce1-a200-ccdeed3bbbce", 00:11:00.704 "is_configured": true, 00:11:00.704 "data_offset": 2048, 00:11:00.704 "data_size": 63488 00:11:00.704 }, 00:11:00.704 { 00:11:00.704 "name": "BaseBdev2", 00:11:00.704 "uuid": "a5e7b793-e2ba-5aad-a89a-6f0ad1e5d8a2", 00:11:00.704 "is_configured": true, 00:11:00.704 "data_offset": 2048, 00:11:00.704 "data_size": 63488 00:11:00.704 }, 00:11:00.704 { 00:11:00.704 "name": "BaseBdev3", 00:11:00.704 "uuid": "fbce5d79-9b58-5404-a5d3-a5e9bbf03dea", 00:11:00.704 "is_configured": true, 00:11:00.704 "data_offset": 2048, 00:11:00.704 "data_size": 63488 00:11:00.704 } 00:11:00.704 ] 00:11:00.704 }' 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.704 20:04:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.964 [2024-12-05 20:04:02.348688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.964 [2024-12-05 20:04:02.348724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.964 [2024-12-05 20:04:02.351882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.964 [2024-12-05 20:04:02.351952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.964 [2024-12-05 20:04:02.352000] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.964 [2024-12-05 20:04:02.352011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:00.964 { 00:11:00.964 "results": [ 00:11:00.964 { 00:11:00.964 "job": "raid_bdev1", 00:11:00.964 "core_mask": "0x1", 00:11:00.964 "workload": "randrw", 00:11:00.964 "percentage": 50, 00:11:00.964 "status": "finished", 00:11:00.964 "queue_depth": 1, 00:11:00.964 "io_size": 131072, 00:11:00.964 "runtime": 1.337268, 00:11:00.964 "iops": 14119.832374662372, 00:11:00.964 "mibps": 1764.9790468327965, 00:11:00.964 "io_failed": 1, 00:11:00.964 "io_timeout": 0, 00:11:00.964 "avg_latency_us": 98.13272750356307, 00:11:00.964 "min_latency_us": 23.699563318777294, 00:11:00.964 "max_latency_us": 1631.2454148471616 00:11:00.964 } 00:11:00.964 ], 00:11:00.964 "core_count": 1 00:11:00.964 } 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65582 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65582 ']' 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65582 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65582 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65582' 00:11:00.964 killing process with pid 65582 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65582 00:11:00.964 [2024-12-05 20:04:02.397771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.964 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65582 00:11:01.531 [2024-12-05 20:04:02.666322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.914 20:04:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gcDvhhE51G 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:02.914 ************************************ 00:11:02.914 END TEST raid_write_error_test 00:11:02.914 ************************************ 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:02.914 00:11:02.914 real 0m4.733s 00:11:02.914 user 0m5.566s 00:11:02.914 sys 0m0.585s 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.914 20:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.914 20:04:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:02.914 20:04:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:02.914 20:04:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.914 20:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.914 20:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.914 ************************************ 00:11:02.914 START TEST raid_state_function_test 00:11:02.914 ************************************ 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.914 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:02.915 20:04:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65725 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65725' 00:11:02.915 Process raid pid: 65725 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65725 00:11:02.915 20:04:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65725 ']' 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.915 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.915 [2024-12-05 20:04:04.190633] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:02.915 [2024-12-05 20:04:04.190829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.174 [2024-12-05 20:04:04.351679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.174 [2024-12-05 20:04:04.480132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.434 [2024-12-05 20:04:04.718458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.434 [2024-12-05 20:04:04.718601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.695 [2024-12-05 20:04:05.115508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.695 [2024-12-05 20:04:05.115623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.695 [2024-12-05 20:04:05.115662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.695 [2024-12-05 20:04:05.115690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.695 [2024-12-05 20:04:05.115718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.695 [2024-12-05 20:04:05.115743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.695 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.954 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.954 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.954 "name": "Existed_Raid", 00:11:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.954 "strip_size_kb": 64, 00:11:03.954 "state": "configuring", 00:11:03.954 "raid_level": "concat", 00:11:03.954 "superblock": false, 00:11:03.954 "num_base_bdevs": 3, 00:11:03.954 "num_base_bdevs_discovered": 0, 00:11:03.954 "num_base_bdevs_operational": 3, 00:11:03.954 "base_bdevs_list": [ 00:11:03.954 { 00:11:03.954 "name": "BaseBdev1", 00:11:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.954 "is_configured": false, 00:11:03.954 "data_offset": 0, 00:11:03.954 "data_size": 0 00:11:03.954 }, 00:11:03.954 { 00:11:03.954 "name": "BaseBdev2", 00:11:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.954 "is_configured": false, 00:11:03.954 "data_offset": 0, 00:11:03.954 "data_size": 0 00:11:03.954 }, 00:11:03.954 { 00:11:03.954 "name": "BaseBdev3", 00:11:03.954 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:03.954 "is_configured": false, 00:11:03.954 "data_offset": 0, 00:11:03.954 "data_size": 0 00:11:03.954 } 00:11:03.954 ] 00:11:03.954 }' 00:11:03.954 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.954 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.212 [2024-12-05 20:04:05.574709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.212 [2024-12-05 20:04:05.574798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.212 [2024-12-05 20:04:05.586672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.212 [2024-12-05 20:04:05.586734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.212 [2024-12-05 20:04:05.586744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.212 [2024-12-05 20:04:05.586755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:04.212 [2024-12-05 20:04:05.586762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.212 [2024-12-05 20:04:05.586772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.212 BaseBdev1 00:11:04.212 [2024-12-05 20:04:05.638887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.212 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.471 [ 00:11:04.471 { 00:11:04.471 "name": "BaseBdev1", 00:11:04.471 "aliases": [ 00:11:04.471 "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00" 00:11:04.471 ], 00:11:04.471 "product_name": "Malloc disk", 00:11:04.471 "block_size": 512, 00:11:04.471 "num_blocks": 65536, 00:11:04.471 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:04.471 "assigned_rate_limits": { 00:11:04.471 "rw_ios_per_sec": 0, 00:11:04.471 "rw_mbytes_per_sec": 0, 00:11:04.471 "r_mbytes_per_sec": 0, 00:11:04.471 "w_mbytes_per_sec": 0 00:11:04.471 }, 00:11:04.471 "claimed": true, 00:11:04.471 "claim_type": "exclusive_write", 00:11:04.471 "zoned": false, 00:11:04.471 "supported_io_types": { 00:11:04.471 "read": true, 00:11:04.471 "write": true, 00:11:04.471 "unmap": true, 00:11:04.471 "flush": true, 00:11:04.471 "reset": true, 00:11:04.471 "nvme_admin": false, 00:11:04.471 "nvme_io": false, 00:11:04.471 "nvme_io_md": false, 00:11:04.471 "write_zeroes": true, 00:11:04.471 "zcopy": true, 00:11:04.471 "get_zone_info": false, 00:11:04.471 "zone_management": false, 00:11:04.471 "zone_append": false, 00:11:04.471 "compare": false, 00:11:04.471 "compare_and_write": false, 00:11:04.471 "abort": true, 00:11:04.471 "seek_hole": false, 00:11:04.471 "seek_data": false, 00:11:04.471 "copy": true, 00:11:04.471 "nvme_iov_md": false 00:11:04.471 }, 00:11:04.471 "memory_domains": [ 00:11:04.471 { 00:11:04.471 "dma_device_id": "system", 00:11:04.471 "dma_device_type": 1 00:11:04.471 }, 00:11:04.471 { 00:11:04.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:04.471 "dma_device_type": 2 00:11:04.471 } 00:11:04.471 ], 00:11:04.471 "driver_specific": {} 00:11:04.471 } 00:11:04.471 ] 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.471 20:04:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.471 "name": "Existed_Raid", 00:11:04.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.471 "strip_size_kb": 64, 00:11:04.471 "state": "configuring", 00:11:04.471 "raid_level": "concat", 00:11:04.471 "superblock": false, 00:11:04.471 "num_base_bdevs": 3, 00:11:04.471 "num_base_bdevs_discovered": 1, 00:11:04.471 "num_base_bdevs_operational": 3, 00:11:04.471 "base_bdevs_list": [ 00:11:04.471 { 00:11:04.471 "name": "BaseBdev1", 00:11:04.471 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:04.471 "is_configured": true, 00:11:04.471 "data_offset": 0, 00:11:04.471 "data_size": 65536 00:11:04.471 }, 00:11:04.471 { 00:11:04.471 "name": "BaseBdev2", 00:11:04.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.471 "is_configured": false, 00:11:04.471 "data_offset": 0, 00:11:04.471 "data_size": 0 00:11:04.471 }, 00:11:04.471 { 00:11:04.471 "name": "BaseBdev3", 00:11:04.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.471 "is_configured": false, 00:11:04.471 "data_offset": 0, 00:11:04.471 "data_size": 0 00:11:04.471 } 00:11:04.471 ] 00:11:04.471 }' 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.471 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.731 [2024-12-05 20:04:06.122168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.731 [2024-12-05 20:04:06.122227] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.731 [2024-12-05 20:04:06.134190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.731 [2024-12-05 20:04:06.136232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.731 [2024-12-05 20:04:06.136321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.731 [2024-12-05 20:04:06.136363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.731 [2024-12-05 20:04:06.136392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.731 20:04:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.990 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.990 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.990 "name": "Existed_Raid", 00:11:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.990 "strip_size_kb": 64, 00:11:04.990 "state": "configuring", 00:11:04.990 "raid_level": "concat", 00:11:04.990 "superblock": false, 00:11:04.990 "num_base_bdevs": 3, 00:11:04.990 "num_base_bdevs_discovered": 1, 00:11:04.990 "num_base_bdevs_operational": 3, 00:11:04.990 "base_bdevs_list": [ 00:11:04.990 { 00:11:04.990 "name": "BaseBdev1", 00:11:04.990 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:04.990 "is_configured": true, 00:11:04.990 "data_offset": 
0, 00:11:04.990 "data_size": 65536 00:11:04.990 }, 00:11:04.990 { 00:11:04.990 "name": "BaseBdev2", 00:11:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.990 "is_configured": false, 00:11:04.990 "data_offset": 0, 00:11:04.990 "data_size": 0 00:11:04.990 }, 00:11:04.990 { 00:11:04.990 "name": "BaseBdev3", 00:11:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.990 "is_configured": false, 00:11:04.990 "data_offset": 0, 00:11:04.990 "data_size": 0 00:11:04.990 } 00:11:04.990 ] 00:11:04.990 }' 00:11:04.990 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.990 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 [2024-12-05 20:04:06.655102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.251 BaseBdev2 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.251 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 [ 00:11:05.251 { 00:11:05.251 "name": "BaseBdev2", 00:11:05.251 "aliases": [ 00:11:05.251 "710f93a4-837a-4230-aa9f-d8b393b47019" 00:11:05.251 ], 00:11:05.251 "product_name": "Malloc disk", 00:11:05.251 "block_size": 512, 00:11:05.251 "num_blocks": 65536, 00:11:05.251 "uuid": "710f93a4-837a-4230-aa9f-d8b393b47019", 00:11:05.251 "assigned_rate_limits": { 00:11:05.251 "rw_ios_per_sec": 0, 00:11:05.251 "rw_mbytes_per_sec": 0, 00:11:05.251 "r_mbytes_per_sec": 0, 00:11:05.251 "w_mbytes_per_sec": 0 00:11:05.251 }, 00:11:05.251 "claimed": true, 00:11:05.251 "claim_type": "exclusive_write", 00:11:05.251 "zoned": false, 00:11:05.251 "supported_io_types": { 00:11:05.251 "read": true, 00:11:05.251 "write": true, 00:11:05.251 "unmap": true, 00:11:05.512 "flush": true, 00:11:05.512 "reset": true, 00:11:05.512 "nvme_admin": false, 00:11:05.512 "nvme_io": false, 00:11:05.512 "nvme_io_md": false, 00:11:05.512 "write_zeroes": true, 00:11:05.512 "zcopy": true, 00:11:05.512 "get_zone_info": false, 00:11:05.512 "zone_management": false, 00:11:05.512 "zone_append": false, 00:11:05.512 "compare": false, 00:11:05.512 "compare_and_write": false, 00:11:05.512 "abort": true, 00:11:05.512 "seek_hole": 
false, 00:11:05.512 "seek_data": false, 00:11:05.512 "copy": true, 00:11:05.512 "nvme_iov_md": false 00:11:05.512 }, 00:11:05.512 "memory_domains": [ 00:11:05.512 { 00:11:05.512 "dma_device_id": "system", 00:11:05.512 "dma_device_type": 1 00:11:05.512 }, 00:11:05.512 { 00:11:05.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.512 "dma_device_type": 2 00:11:05.512 } 00:11:05.512 ], 00:11:05.512 "driver_specific": {} 00:11:05.512 } 00:11:05.512 ] 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.512 "name": "Existed_Raid", 00:11:05.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.512 "strip_size_kb": 64, 00:11:05.512 "state": "configuring", 00:11:05.512 "raid_level": "concat", 00:11:05.512 "superblock": false, 00:11:05.512 "num_base_bdevs": 3, 00:11:05.512 "num_base_bdevs_discovered": 2, 00:11:05.512 "num_base_bdevs_operational": 3, 00:11:05.512 "base_bdevs_list": [ 00:11:05.512 { 00:11:05.512 "name": "BaseBdev1", 00:11:05.512 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:05.512 "is_configured": true, 00:11:05.512 "data_offset": 0, 00:11:05.512 "data_size": 65536 00:11:05.512 }, 00:11:05.512 { 00:11:05.512 "name": "BaseBdev2", 00:11:05.512 "uuid": "710f93a4-837a-4230-aa9f-d8b393b47019", 00:11:05.512 "is_configured": true, 00:11:05.512 "data_offset": 0, 00:11:05.512 "data_size": 65536 00:11:05.512 }, 00:11:05.512 { 00:11:05.512 "name": "BaseBdev3", 00:11:05.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.512 "is_configured": false, 00:11:05.512 "data_offset": 0, 00:11:05.512 "data_size": 0 00:11:05.512 } 00:11:05.512 ] 00:11:05.512 }' 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.512 20:04:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.772 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.772 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.772 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.031 [2024-12-05 20:04:07.216537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.031 [2024-12-05 20:04:07.216680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.031 [2024-12-05 20:04:07.216717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:06.031 [2024-12-05 20:04:07.217062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:06.031 [2024-12-05 20:04:07.217301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.031 [2024-12-05 20:04:07.217350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:06.031 [2024-12-05 20:04:07.217659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.031 BaseBdev3 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.031 20:04:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.031 [ 00:11:06.031 { 00:11:06.031 "name": "BaseBdev3", 00:11:06.031 "aliases": [ 00:11:06.031 "5a65dce5-256d-4257-ad55-715e75874b04" 00:11:06.031 ], 00:11:06.031 "product_name": "Malloc disk", 00:11:06.031 "block_size": 512, 00:11:06.031 "num_blocks": 65536, 00:11:06.031 "uuid": "5a65dce5-256d-4257-ad55-715e75874b04", 00:11:06.031 "assigned_rate_limits": { 00:11:06.031 "rw_ios_per_sec": 0, 00:11:06.031 "rw_mbytes_per_sec": 0, 00:11:06.031 "r_mbytes_per_sec": 0, 00:11:06.031 "w_mbytes_per_sec": 0 00:11:06.031 }, 00:11:06.031 "claimed": true, 00:11:06.031 "claim_type": "exclusive_write", 00:11:06.031 "zoned": false, 00:11:06.031 "supported_io_types": { 00:11:06.031 "read": true, 00:11:06.031 "write": true, 00:11:06.031 "unmap": true, 00:11:06.031 "flush": true, 00:11:06.031 "reset": true, 00:11:06.031 "nvme_admin": false, 00:11:06.031 "nvme_io": false, 00:11:06.031 "nvme_io_md": false, 00:11:06.031 "write_zeroes": true, 00:11:06.031 "zcopy": true, 00:11:06.031 "get_zone_info": false, 00:11:06.031 "zone_management": false, 00:11:06.031 "zone_append": false, 00:11:06.031 "compare": false, 
00:11:06.031 "compare_and_write": false, 00:11:06.031 "abort": true, 00:11:06.031 "seek_hole": false, 00:11:06.031 "seek_data": false, 00:11:06.031 "copy": true, 00:11:06.031 "nvme_iov_md": false 00:11:06.031 }, 00:11:06.031 "memory_domains": [ 00:11:06.031 { 00:11:06.031 "dma_device_id": "system", 00:11:06.031 "dma_device_type": 1 00:11:06.031 }, 00:11:06.031 { 00:11:06.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.031 "dma_device_type": 2 00:11:06.031 } 00:11:06.031 ], 00:11:06.031 "driver_specific": {} 00:11:06.031 } 00:11:06.031 ] 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.031 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.032 "name": "Existed_Raid", 00:11:06.032 "uuid": "0b416f9b-6b24-4deb-b478-842f24540c95", 00:11:06.032 "strip_size_kb": 64, 00:11:06.032 "state": "online", 00:11:06.032 "raid_level": "concat", 00:11:06.032 "superblock": false, 00:11:06.032 "num_base_bdevs": 3, 00:11:06.032 "num_base_bdevs_discovered": 3, 00:11:06.032 "num_base_bdevs_operational": 3, 00:11:06.032 "base_bdevs_list": [ 00:11:06.032 { 00:11:06.032 "name": "BaseBdev1", 00:11:06.032 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:06.032 "is_configured": true, 00:11:06.032 "data_offset": 0, 00:11:06.032 "data_size": 65536 00:11:06.032 }, 00:11:06.032 { 00:11:06.032 "name": "BaseBdev2", 00:11:06.032 "uuid": "710f93a4-837a-4230-aa9f-d8b393b47019", 00:11:06.032 "is_configured": true, 00:11:06.032 "data_offset": 0, 00:11:06.032 "data_size": 65536 00:11:06.032 }, 00:11:06.032 { 00:11:06.032 "name": "BaseBdev3", 00:11:06.032 "uuid": "5a65dce5-256d-4257-ad55-715e75874b04", 00:11:06.032 "is_configured": true, 00:11:06.032 "data_offset": 0, 00:11:06.032 "data_size": 65536 00:11:06.032 } 00:11:06.032 ] 00:11:06.032 }' 00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:06.032 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.291 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.291 [2024-12-05 20:04:07.720335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.552 "name": "Existed_Raid", 00:11:06.552 "aliases": [ 00:11:06.552 "0b416f9b-6b24-4deb-b478-842f24540c95" 00:11:06.552 ], 00:11:06.552 "product_name": "Raid Volume", 00:11:06.552 "block_size": 512, 00:11:06.552 "num_blocks": 196608, 00:11:06.552 "uuid": "0b416f9b-6b24-4deb-b478-842f24540c95", 00:11:06.552 "assigned_rate_limits": { 00:11:06.552 "rw_ios_per_sec": 0, 00:11:06.552 "rw_mbytes_per_sec": 0, 00:11:06.552 "r_mbytes_per_sec": 
0, 00:11:06.552 "w_mbytes_per_sec": 0 00:11:06.552 }, 00:11:06.552 "claimed": false, 00:11:06.552 "zoned": false, 00:11:06.552 "supported_io_types": { 00:11:06.552 "read": true, 00:11:06.552 "write": true, 00:11:06.552 "unmap": true, 00:11:06.552 "flush": true, 00:11:06.552 "reset": true, 00:11:06.552 "nvme_admin": false, 00:11:06.552 "nvme_io": false, 00:11:06.552 "nvme_io_md": false, 00:11:06.552 "write_zeroes": true, 00:11:06.552 "zcopy": false, 00:11:06.552 "get_zone_info": false, 00:11:06.552 "zone_management": false, 00:11:06.552 "zone_append": false, 00:11:06.552 "compare": false, 00:11:06.552 "compare_and_write": false, 00:11:06.552 "abort": false, 00:11:06.552 "seek_hole": false, 00:11:06.552 "seek_data": false, 00:11:06.552 "copy": false, 00:11:06.552 "nvme_iov_md": false 00:11:06.552 }, 00:11:06.552 "memory_domains": [ 00:11:06.552 { 00:11:06.552 "dma_device_id": "system", 00:11:06.552 "dma_device_type": 1 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.552 "dma_device_type": 2 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "system", 00:11:06.552 "dma_device_type": 1 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.552 "dma_device_type": 2 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "system", 00:11:06.552 "dma_device_type": 1 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.552 "dma_device_type": 2 00:11:06.552 } 00:11:06.552 ], 00:11:06.552 "driver_specific": { 00:11:06.552 "raid": { 00:11:06.552 "uuid": "0b416f9b-6b24-4deb-b478-842f24540c95", 00:11:06.552 "strip_size_kb": 64, 00:11:06.552 "state": "online", 00:11:06.552 "raid_level": "concat", 00:11:06.552 "superblock": false, 00:11:06.552 "num_base_bdevs": 3, 00:11:06.552 "num_base_bdevs_discovered": 3, 00:11:06.552 "num_base_bdevs_operational": 3, 00:11:06.552 "base_bdevs_list": [ 00:11:06.552 { 00:11:06.552 "name": "BaseBdev1", 
00:11:06.552 "uuid": "f1e775c8-ad9b-4e0b-8485-80b9b37cfd00", 00:11:06.552 "is_configured": true, 00:11:06.552 "data_offset": 0, 00:11:06.552 "data_size": 65536 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "name": "BaseBdev2", 00:11:06.552 "uuid": "710f93a4-837a-4230-aa9f-d8b393b47019", 00:11:06.552 "is_configured": true, 00:11:06.552 "data_offset": 0, 00:11:06.552 "data_size": 65536 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "name": "BaseBdev3", 00:11:06.552 "uuid": "5a65dce5-256d-4257-ad55-715e75874b04", 00:11:06.552 "is_configured": true, 00:11:06.552 "data_offset": 0, 00:11:06.552 "data_size": 65536 00:11:06.552 } 00:11:06.552 ] 00:11:06.552 } 00:11:06.552 } 00:11:06.552 }' 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:06.552 BaseBdev2 00:11:06.552 BaseBdev3' 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.552 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
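The property check traced above flattens four layout fields of the raid bdev and of each base bdev into one string and compares them. A minimal standalone sketch of that jq pattern, run against canned JSON rather than a live SPDK target (the sample objects below are hypothetical stand-ins for `rpc_cmd bdev_get_bdevs -b <name>` output, reduced to the fields the filter reads):

```shell
#!/usr/bin/env bash
# Sketch only: canned JSON instead of live RPC output. jq's join() renders
# null as "" and numbers as strings, so absent md_size/md_interleave/dif_type
# collapse to trailing spaces -- which is why the trace compares against
# the pattern \5\1\2\ \ \ (i.e. "512" plus three spaces).
set -euo pipefail

raid_json='{"name":"Existed_Raid","block_size":512}'
base_json='{"name":"BaseBdev1","block_size":512}'

# Same filter the test script uses on each bdev.
props='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'

cmp_raid_bdev=$(echo "$raid_json" | jq -r "$props")
cmp_base_bdev=$(echo "$base_json" | jq -r "$props")

# The raid volume must report the same block layout as every base bdev.
[ "$cmp_raid_bdev" = "$cmp_base_bdev" ]
echo "layout match: '$cmp_raid_bdev'"
```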
00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.553 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.553 [2024-12-05 20:04:07.979619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.553 [2024-12-05 20:04:07.979736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.553 [2024-12-05 20:04:07.979827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.832 "name": "Existed_Raid", 00:11:06.832 "uuid": "0b416f9b-6b24-4deb-b478-842f24540c95", 00:11:06.832 "strip_size_kb": 64, 00:11:06.832 "state": "offline", 00:11:06.832 "raid_level": "concat", 00:11:06.832 "superblock": false, 00:11:06.832 "num_base_bdevs": 3, 00:11:06.832 "num_base_bdevs_discovered": 2, 00:11:06.832 "num_base_bdevs_operational": 2, 00:11:06.832 "base_bdevs_list": [ 00:11:06.832 { 00:11:06.832 "name": null, 00:11:06.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.832 "is_configured": false, 00:11:06.832 "data_offset": 0, 00:11:06.832 "data_size": 65536 00:11:06.832 }, 00:11:06.832 { 00:11:06.832 "name": "BaseBdev2", 00:11:06.832 "uuid": 
"710f93a4-837a-4230-aa9f-d8b393b47019", 00:11:06.832 "is_configured": true, 00:11:06.832 "data_offset": 0, 00:11:06.832 "data_size": 65536 00:11:06.832 }, 00:11:06.832 { 00:11:06.832 "name": "BaseBdev3", 00:11:06.832 "uuid": "5a65dce5-256d-4257-ad55-715e75874b04", 00:11:06.832 "is_configured": true, 00:11:06.832 "data_offset": 0, 00:11:06.832 "data_size": 65536 00:11:06.832 } 00:11:06.832 ] 00:11:06.832 }' 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.832 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 [2024-12-05 20:04:08.659172] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.412 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 [2024-12-05 20:04:08.830081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.412 [2024-12-05 20:04:08.830209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.672 20:04:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.672 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.672 BaseBdev2 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.672 
20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.672 [ 00:11:07.672 { 00:11:07.672 "name": "BaseBdev2", 00:11:07.672 "aliases": [ 00:11:07.672 "debd94c6-a635-4cf6-863c-a819c2285093" 00:11:07.672 ], 00:11:07.672 "product_name": "Malloc disk", 00:11:07.672 "block_size": 512, 00:11:07.672 "num_blocks": 65536, 00:11:07.672 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:07.672 "assigned_rate_limits": { 00:11:07.672 "rw_ios_per_sec": 0, 00:11:07.672 "rw_mbytes_per_sec": 0, 00:11:07.672 "r_mbytes_per_sec": 0, 00:11:07.672 "w_mbytes_per_sec": 0 00:11:07.672 }, 00:11:07.672 "claimed": false, 00:11:07.672 "zoned": false, 00:11:07.672 "supported_io_types": { 00:11:07.672 "read": true, 00:11:07.672 "write": true, 00:11:07.672 "unmap": true, 00:11:07.672 "flush": true, 00:11:07.672 "reset": true, 00:11:07.672 "nvme_admin": false, 00:11:07.672 "nvme_io": false, 00:11:07.672 "nvme_io_md": false, 00:11:07.672 "write_zeroes": true, 
00:11:07.672 "zcopy": true, 00:11:07.672 "get_zone_info": false, 00:11:07.672 "zone_management": false, 00:11:07.672 "zone_append": false, 00:11:07.672 "compare": false, 00:11:07.672 "compare_and_write": false, 00:11:07.672 "abort": true, 00:11:07.672 "seek_hole": false, 00:11:07.672 "seek_data": false, 00:11:07.672 "copy": true, 00:11:07.672 "nvme_iov_md": false 00:11:07.672 }, 00:11:07.672 "memory_domains": [ 00:11:07.672 { 00:11:07.672 "dma_device_id": "system", 00:11:07.672 "dma_device_type": 1 00:11:07.672 }, 00:11:07.672 { 00:11:07.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.672 "dma_device_type": 2 00:11:07.672 } 00:11:07.672 ], 00:11:07.672 "driver_specific": {} 00:11:07.672 } 00:11:07.672 ] 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.672 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 BaseBdev3 00:11:07.932 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:07.932 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.933 20:04:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 [ 00:11:07.933 { 00:11:07.933 "name": "BaseBdev3", 00:11:07.933 "aliases": [ 00:11:07.933 "d22c546b-d21e-406d-9e09-55699e020a80" 00:11:07.933 ], 00:11:07.933 "product_name": "Malloc disk", 00:11:07.933 "block_size": 512, 00:11:07.933 "num_blocks": 65536, 00:11:07.933 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:07.933 "assigned_rate_limits": { 00:11:07.933 "rw_ios_per_sec": 0, 00:11:07.933 "rw_mbytes_per_sec": 0, 00:11:07.933 "r_mbytes_per_sec": 0, 00:11:07.933 "w_mbytes_per_sec": 0 00:11:07.933 }, 00:11:07.933 "claimed": false, 00:11:07.933 "zoned": false, 00:11:07.933 "supported_io_types": { 00:11:07.933 "read": true, 00:11:07.933 "write": true, 00:11:07.933 "unmap": true, 00:11:07.933 "flush": true, 00:11:07.933 "reset": true, 00:11:07.933 "nvme_admin": false, 00:11:07.933 "nvme_io": false, 00:11:07.933 "nvme_io_md": false, 00:11:07.933 "write_zeroes": true, 
00:11:07.933 "zcopy": true, 00:11:07.933 "get_zone_info": false, 00:11:07.933 "zone_management": false, 00:11:07.933 "zone_append": false, 00:11:07.933 "compare": false, 00:11:07.933 "compare_and_write": false, 00:11:07.933 "abort": true, 00:11:07.933 "seek_hole": false, 00:11:07.933 "seek_data": false, 00:11:07.933 "copy": true, 00:11:07.933 "nvme_iov_md": false 00:11:07.933 }, 00:11:07.933 "memory_domains": [ 00:11:07.933 { 00:11:07.933 "dma_device_id": "system", 00:11:07.933 "dma_device_type": 1 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.933 "dma_device_type": 2 00:11:07.933 } 00:11:07.933 ], 00:11:07.933 "driver_specific": {} 00:11:07.933 } 00:11:07.933 ] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 [2024-12-05 20:04:09.174656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.933 [2024-12-05 20:04:09.174769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.933 [2024-12-05 20:04:09.174808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.933 [2024-12-05 20:04:09.176955] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.933 "name": "Existed_Raid", 00:11:07.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.933 "strip_size_kb": 64, 00:11:07.933 "state": "configuring", 00:11:07.933 "raid_level": "concat", 00:11:07.933 "superblock": false, 00:11:07.933 "num_base_bdevs": 3, 00:11:07.933 "num_base_bdevs_discovered": 2, 00:11:07.933 "num_base_bdevs_operational": 3, 00:11:07.933 "base_bdevs_list": [ 00:11:07.933 { 00:11:07.933 "name": "BaseBdev1", 00:11:07.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.933 "is_configured": false, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 0 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "name": "BaseBdev2", 00:11:07.933 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 65536 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "name": "BaseBdev3", 00:11:07.933 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 65536 00:11:07.933 } 00:11:07.933 ] 00:11:07.933 }' 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.933 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.501 [2024-12-05 20:04:09.665842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.501 "name": "Existed_Raid", 00:11:08.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.501 "strip_size_kb": 64, 00:11:08.501 "state": "configuring", 00:11:08.501 "raid_level": "concat", 00:11:08.501 "superblock": false, 
00:11:08.501 "num_base_bdevs": 3, 00:11:08.501 "num_base_bdevs_discovered": 1, 00:11:08.501 "num_base_bdevs_operational": 3, 00:11:08.501 "base_bdevs_list": [ 00:11:08.501 { 00:11:08.501 "name": "BaseBdev1", 00:11:08.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.501 "is_configured": false, 00:11:08.501 "data_offset": 0, 00:11:08.501 "data_size": 0 00:11:08.501 }, 00:11:08.501 { 00:11:08.501 "name": null, 00:11:08.501 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:08.501 "is_configured": false, 00:11:08.501 "data_offset": 0, 00:11:08.501 "data_size": 65536 00:11:08.501 }, 00:11:08.501 { 00:11:08.501 "name": "BaseBdev3", 00:11:08.501 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:08.501 "is_configured": true, 00:11:08.501 "data_offset": 0, 00:11:08.501 "data_size": 65536 00:11:08.501 } 00:11:08.501 ] 00:11:08.501 }' 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.501 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.800 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.800 
20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.058 [2024-12-05 20:04:10.248190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.058 BaseBdev1 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.058 [ 00:11:09.058 { 00:11:09.058 "name": "BaseBdev1", 00:11:09.058 "aliases": [ 00:11:09.058 "29ad7d4d-fd5e-4358-a806-af521d4c66c6" 00:11:09.058 ], 00:11:09.058 "product_name": 
"Malloc disk", 00:11:09.058 "block_size": 512, 00:11:09.058 "num_blocks": 65536, 00:11:09.058 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:09.058 "assigned_rate_limits": { 00:11:09.058 "rw_ios_per_sec": 0, 00:11:09.058 "rw_mbytes_per_sec": 0, 00:11:09.058 "r_mbytes_per_sec": 0, 00:11:09.058 "w_mbytes_per_sec": 0 00:11:09.058 }, 00:11:09.058 "claimed": true, 00:11:09.058 "claim_type": "exclusive_write", 00:11:09.058 "zoned": false, 00:11:09.058 "supported_io_types": { 00:11:09.058 "read": true, 00:11:09.058 "write": true, 00:11:09.058 "unmap": true, 00:11:09.058 "flush": true, 00:11:09.058 "reset": true, 00:11:09.058 "nvme_admin": false, 00:11:09.058 "nvme_io": false, 00:11:09.058 "nvme_io_md": false, 00:11:09.058 "write_zeroes": true, 00:11:09.058 "zcopy": true, 00:11:09.058 "get_zone_info": false, 00:11:09.058 "zone_management": false, 00:11:09.058 "zone_append": false, 00:11:09.058 "compare": false, 00:11:09.058 "compare_and_write": false, 00:11:09.058 "abort": true, 00:11:09.058 "seek_hole": false, 00:11:09.058 "seek_data": false, 00:11:09.058 "copy": true, 00:11:09.058 "nvme_iov_md": false 00:11:09.058 }, 00:11:09.058 "memory_domains": [ 00:11:09.058 { 00:11:09.058 "dma_device_id": "system", 00:11:09.058 "dma_device_type": 1 00:11:09.058 }, 00:11:09.058 { 00:11:09.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.058 "dma_device_type": 2 00:11:09.058 } 00:11:09.058 ], 00:11:09.058 "driver_specific": {} 00:11:09.058 } 00:11:09.058 ] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.058 20:04:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.058 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.058 "name": "Existed_Raid", 00:11:09.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.058 "strip_size_kb": 64, 00:11:09.058 "state": "configuring", 00:11:09.058 "raid_level": "concat", 00:11:09.058 "superblock": false, 00:11:09.058 "num_base_bdevs": 3, 00:11:09.058 "num_base_bdevs_discovered": 2, 00:11:09.058 "num_base_bdevs_operational": 3, 00:11:09.058 "base_bdevs_list": [ 00:11:09.058 { 00:11:09.058 "name": "BaseBdev1", 
00:11:09.058 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:09.058 "is_configured": true, 00:11:09.058 "data_offset": 0, 00:11:09.058 "data_size": 65536 00:11:09.058 }, 00:11:09.058 { 00:11:09.058 "name": null, 00:11:09.058 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:09.058 "is_configured": false, 00:11:09.058 "data_offset": 0, 00:11:09.058 "data_size": 65536 00:11:09.058 }, 00:11:09.058 { 00:11:09.058 "name": "BaseBdev3", 00:11:09.058 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:09.058 "is_configured": true, 00:11:09.058 "data_offset": 0, 00:11:09.058 "data_size": 65536 00:11:09.058 } 00:11:09.059 ] 00:11:09.059 }' 00:11:09.059 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.059 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.317 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.317 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.317 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.317 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.317 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.577 [2024-12-05 20:04:10.787341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.577 
20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.577 "name": "Existed_Raid", 00:11:09.577 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:09.577 "strip_size_kb": 64, 00:11:09.577 "state": "configuring", 00:11:09.577 "raid_level": "concat", 00:11:09.577 "superblock": false, 00:11:09.577 "num_base_bdevs": 3, 00:11:09.577 "num_base_bdevs_discovered": 1, 00:11:09.577 "num_base_bdevs_operational": 3, 00:11:09.577 "base_bdevs_list": [ 00:11:09.577 { 00:11:09.577 "name": "BaseBdev1", 00:11:09.577 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:09.577 "is_configured": true, 00:11:09.577 "data_offset": 0, 00:11:09.577 "data_size": 65536 00:11:09.577 }, 00:11:09.577 { 00:11:09.577 "name": null, 00:11:09.577 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:09.577 "is_configured": false, 00:11:09.577 "data_offset": 0, 00:11:09.577 "data_size": 65536 00:11:09.577 }, 00:11:09.577 { 00:11:09.577 "name": null, 00:11:09.577 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:09.577 "is_configured": false, 00:11:09.577 "data_offset": 0, 00:11:09.577 "data_size": 65536 00:11:09.577 } 00:11:09.577 ] 00:11:09.577 }' 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.577 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.835 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.835 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.835 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.835 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.094 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.094 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.094 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.094 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.094 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.094 [2024-12-05 20:04:11.310523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.095 "name": "Existed_Raid", 00:11:10.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.095 "strip_size_kb": 64, 00:11:10.095 "state": "configuring", 00:11:10.095 "raid_level": "concat", 00:11:10.095 "superblock": false, 00:11:10.095 "num_base_bdevs": 3, 00:11:10.095 "num_base_bdevs_discovered": 2, 00:11:10.095 "num_base_bdevs_operational": 3, 00:11:10.095 "base_bdevs_list": [ 00:11:10.095 { 00:11:10.095 "name": "BaseBdev1", 00:11:10.095 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:10.095 "is_configured": true, 00:11:10.095 "data_offset": 0, 00:11:10.095 "data_size": 65536 00:11:10.095 }, 00:11:10.095 { 00:11:10.095 "name": null, 00:11:10.095 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:10.095 "is_configured": false, 00:11:10.095 "data_offset": 0, 00:11:10.095 "data_size": 65536 00:11:10.095 }, 00:11:10.095 { 00:11:10.095 "name": "BaseBdev3", 00:11:10.095 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:10.095 "is_configured": true, 00:11:10.095 "data_offset": 0, 00:11:10.095 "data_size": 65536 00:11:10.095 } 00:11:10.095 ] 00:11:10.095 }' 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.095 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.354 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.354 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.354 20:04:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.354 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 [2024-12-05 20:04:11.837713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.614 
20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.614 "name": "Existed_Raid", 00:11:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.614 "strip_size_kb": 64, 00:11:10.614 "state": "configuring", 00:11:10.614 "raid_level": "concat", 00:11:10.614 "superblock": false, 00:11:10.614 "num_base_bdevs": 3, 00:11:10.614 "num_base_bdevs_discovered": 1, 00:11:10.614 "num_base_bdevs_operational": 3, 00:11:10.614 "base_bdevs_list": [ 00:11:10.614 { 00:11:10.614 "name": null, 00:11:10.614 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:10.614 "is_configured": false, 00:11:10.614 "data_offset": 0, 00:11:10.614 "data_size": 65536 00:11:10.614 }, 00:11:10.614 { 00:11:10.614 "name": null, 00:11:10.614 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:10.614 "is_configured": false, 00:11:10.614 "data_offset": 0, 00:11:10.614 "data_size": 65536 00:11:10.614 }, 00:11:10.614 { 00:11:10.614 "name": "BaseBdev3", 00:11:10.614 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:10.614 "is_configured": true, 00:11:10.614 "data_offset": 0, 00:11:10.614 "data_size": 65536 00:11:10.614 } 00:11:10.614 ] 00:11:10.614 }' 00:11:10.614 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.614 20:04:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.184 [2024-12-05 20:04:12.478779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.184 20:04:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.184 "name": "Existed_Raid", 00:11:11.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.184 "strip_size_kb": 64, 00:11:11.184 "state": "configuring", 00:11:11.184 "raid_level": "concat", 00:11:11.184 "superblock": false, 00:11:11.184 "num_base_bdevs": 3, 00:11:11.184 "num_base_bdevs_discovered": 2, 00:11:11.184 "num_base_bdevs_operational": 3, 00:11:11.184 "base_bdevs_list": [ 00:11:11.184 { 00:11:11.184 "name": null, 00:11:11.184 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:11.184 "is_configured": false, 00:11:11.184 "data_offset": 0, 00:11:11.184 "data_size": 65536 00:11:11.184 }, 00:11:11.184 { 00:11:11.184 "name": "BaseBdev2", 00:11:11.184 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:11.184 "is_configured": true, 00:11:11.184 "data_offset": 
0, 00:11:11.184 "data_size": 65536 00:11:11.184 }, 00:11:11.184 { 00:11:11.184 "name": "BaseBdev3", 00:11:11.184 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:11.184 "is_configured": true, 00:11:11.184 "data_offset": 0, 00:11:11.184 "data_size": 65536 00:11:11.184 } 00:11:11.184 ] 00:11:11.184 }' 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.184 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.753 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.753 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29ad7d4d-fd5e-4358-a806-af521d4c66c6 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 [2024-12-05 20:04:13.063489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:11.754 [2024-12-05 20:04:13.063616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:11.754 [2024-12-05 20:04:13.063644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:11.754 [2024-12-05 20:04:13.063947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:11.754 [2024-12-05 20:04:13.064169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:11.754 [2024-12-05 20:04:13.064217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:11.754 [2024-12-05 20:04:13.064538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.754 NewBaseBdev 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.754 
20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 [ 00:11:11.754 { 00:11:11.754 "name": "NewBaseBdev", 00:11:11.754 "aliases": [ 00:11:11.754 "29ad7d4d-fd5e-4358-a806-af521d4c66c6" 00:11:11.754 ], 00:11:11.754 "product_name": "Malloc disk", 00:11:11.754 "block_size": 512, 00:11:11.754 "num_blocks": 65536, 00:11:11.754 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:11.754 "assigned_rate_limits": { 00:11:11.754 "rw_ios_per_sec": 0, 00:11:11.754 "rw_mbytes_per_sec": 0, 00:11:11.754 "r_mbytes_per_sec": 0, 00:11:11.754 "w_mbytes_per_sec": 0 00:11:11.754 }, 00:11:11.754 "claimed": true, 00:11:11.754 "claim_type": "exclusive_write", 00:11:11.754 "zoned": false, 00:11:11.754 "supported_io_types": { 00:11:11.754 "read": true, 00:11:11.754 "write": true, 00:11:11.754 "unmap": true, 00:11:11.754 "flush": true, 00:11:11.754 "reset": true, 00:11:11.754 "nvme_admin": false, 00:11:11.754 "nvme_io": false, 00:11:11.754 "nvme_io_md": false, 00:11:11.754 "write_zeroes": true, 00:11:11.754 "zcopy": true, 00:11:11.754 "get_zone_info": false, 00:11:11.754 "zone_management": false, 00:11:11.754 "zone_append": false, 00:11:11.754 "compare": false, 00:11:11.754 "compare_and_write": false, 00:11:11.754 "abort": true, 00:11:11.754 "seek_hole": false, 00:11:11.754 "seek_data": false, 00:11:11.754 "copy": true, 00:11:11.754 "nvme_iov_md": false 00:11:11.754 }, 00:11:11.754 
"memory_domains": [ 00:11:11.754 { 00:11:11.754 "dma_device_id": "system", 00:11:11.754 "dma_device_type": 1 00:11:11.754 }, 00:11:11.754 { 00:11:11.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.754 "dma_device_type": 2 00:11:11.754 } 00:11:11.754 ], 00:11:11.754 "driver_specific": {} 00:11:11.754 } 00:11:11.754 ] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.754 "name": "Existed_Raid", 00:11:11.754 "uuid": "6d2b85af-891a-43d1-ac2f-7a00c913e4ff", 00:11:11.754 "strip_size_kb": 64, 00:11:11.754 "state": "online", 00:11:11.754 "raid_level": "concat", 00:11:11.754 "superblock": false, 00:11:11.754 "num_base_bdevs": 3, 00:11:11.754 "num_base_bdevs_discovered": 3, 00:11:11.754 "num_base_bdevs_operational": 3, 00:11:11.754 "base_bdevs_list": [ 00:11:11.754 { 00:11:11.754 "name": "NewBaseBdev", 00:11:11.754 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:11.754 "is_configured": true, 00:11:11.754 "data_offset": 0, 00:11:11.754 "data_size": 65536 00:11:11.754 }, 00:11:11.754 { 00:11:11.754 "name": "BaseBdev2", 00:11:11.754 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:11.754 "is_configured": true, 00:11:11.754 "data_offset": 0, 00:11:11.754 "data_size": 65536 00:11:11.754 }, 00:11:11.754 { 00:11:11.754 "name": "BaseBdev3", 00:11:11.754 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:11.754 "is_configured": true, 00:11:11.754 "data_offset": 0, 00:11:11.754 "data_size": 65536 00:11:11.754 } 00:11:11.754 ] 00:11:11.754 }' 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.754 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.325 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.325 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.325 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:11:12.325 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.326 [2024-12-05 20:04:13.583050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.326 "name": "Existed_Raid", 00:11:12.326 "aliases": [ 00:11:12.326 "6d2b85af-891a-43d1-ac2f-7a00c913e4ff" 00:11:12.326 ], 00:11:12.326 "product_name": "Raid Volume", 00:11:12.326 "block_size": 512, 00:11:12.326 "num_blocks": 196608, 00:11:12.326 "uuid": "6d2b85af-891a-43d1-ac2f-7a00c913e4ff", 00:11:12.326 "assigned_rate_limits": { 00:11:12.326 "rw_ios_per_sec": 0, 00:11:12.326 "rw_mbytes_per_sec": 0, 00:11:12.326 "r_mbytes_per_sec": 0, 00:11:12.326 "w_mbytes_per_sec": 0 00:11:12.326 }, 00:11:12.326 "claimed": false, 00:11:12.326 "zoned": false, 00:11:12.326 "supported_io_types": { 00:11:12.326 "read": true, 00:11:12.326 "write": true, 00:11:12.326 "unmap": true, 00:11:12.326 "flush": true, 00:11:12.326 "reset": true, 00:11:12.326 "nvme_admin": false, 00:11:12.326 "nvme_io": false, 00:11:12.326 "nvme_io_md": false, 00:11:12.326 "write_zeroes": true, 
00:11:12.326 "zcopy": false, 00:11:12.326 "get_zone_info": false, 00:11:12.326 "zone_management": false, 00:11:12.326 "zone_append": false, 00:11:12.326 "compare": false, 00:11:12.326 "compare_and_write": false, 00:11:12.326 "abort": false, 00:11:12.326 "seek_hole": false, 00:11:12.326 "seek_data": false, 00:11:12.326 "copy": false, 00:11:12.326 "nvme_iov_md": false 00:11:12.326 }, 00:11:12.326 "memory_domains": [ 00:11:12.326 { 00:11:12.326 "dma_device_id": "system", 00:11:12.326 "dma_device_type": 1 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.326 "dma_device_type": 2 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "dma_device_id": "system", 00:11:12.326 "dma_device_type": 1 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.326 "dma_device_type": 2 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "dma_device_id": "system", 00:11:12.326 "dma_device_type": 1 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.326 "dma_device_type": 2 00:11:12.326 } 00:11:12.326 ], 00:11:12.326 "driver_specific": { 00:11:12.326 "raid": { 00:11:12.326 "uuid": "6d2b85af-891a-43d1-ac2f-7a00c913e4ff", 00:11:12.326 "strip_size_kb": 64, 00:11:12.326 "state": "online", 00:11:12.326 "raid_level": "concat", 00:11:12.326 "superblock": false, 00:11:12.326 "num_base_bdevs": 3, 00:11:12.326 "num_base_bdevs_discovered": 3, 00:11:12.326 "num_base_bdevs_operational": 3, 00:11:12.326 "base_bdevs_list": [ 00:11:12.326 { 00:11:12.326 "name": "NewBaseBdev", 00:11:12.326 "uuid": "29ad7d4d-fd5e-4358-a806-af521d4c66c6", 00:11:12.326 "is_configured": true, 00:11:12.326 "data_offset": 0, 00:11:12.326 "data_size": 65536 00:11:12.326 }, 00:11:12.326 { 00:11:12.326 "name": "BaseBdev2", 00:11:12.326 "uuid": "debd94c6-a635-4cf6-863c-a819c2285093", 00:11:12.326 "is_configured": true, 00:11:12.326 "data_offset": 0, 00:11:12.326 "data_size": 65536 00:11:12.326 }, 00:11:12.326 { 
00:11:12.326 "name": "BaseBdev3", 00:11:12.326 "uuid": "d22c546b-d21e-406d-9e09-55699e020a80", 00:11:12.326 "is_configured": true, 00:11:12.326 "data_offset": 0, 00:11:12.326 "data_size": 65536 00:11:12.326 } 00:11:12.326 ] 00:11:12.326 } 00:11:12.326 } 00:11:12.326 }' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:12.326 BaseBdev2 00:11:12.326 BaseBdev3' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.326 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.586 20:04:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:12.586 [2024-12-05 20:04:13.870228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.586 [2024-12-05 20:04:13.870271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.587 [2024-12-05 20:04:13.870365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.587 [2024-12-05 20:04:13.870424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.587 [2024-12-05 20:04:13.870437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65725 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65725 ']' 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65725 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65725 00:11:12.587 killing process with pid 65725 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65725' 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65725 00:11:12.587 [2024-12-05 20:04:13.919813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.587 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65725 00:11:12.847 [2024-12-05 20:04:14.260825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:14.233 00:11:14.233 real 0m11.437s 00:11:14.233 user 0m18.211s 00:11:14.233 sys 0m1.954s 00:11:14.233 ************************************ 00:11:14.233 END TEST raid_state_function_test 00:11:14.233 ************************************ 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.233 20:04:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:14.233 20:04:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.233 20:04:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.233 20:04:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.233 ************************************ 00:11:14.233 START TEST raid_state_function_test_sb 00:11:14.233 ************************************ 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66358 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66358' 00:11:14.233 Process raid pid: 66358 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66358 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66358 ']' 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.233 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.503 [2024-12-05 20:04:15.690557] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:14.503 [2024-12-05 20:04:15.690790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.503 [2024-12-05 20:04:15.870168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.762 [2024-12-05 20:04:16.002346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.022 [2024-12-05 20:04:16.218421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.022 [2024-12-05 20:04:16.218560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.282 [2024-12-05 20:04:16.579390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.282 [2024-12-05 20:04:16.579533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.282 [2024-12-05 
20:04:16.579603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.282 [2024-12-05 20:04:16.579638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.282 [2024-12-05 20:04:16.579674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.282 [2024-12-05 20:04:16.579701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.282 "name": "Existed_Raid", 00:11:15.282 "uuid": "80584c3f-03e1-4853-998c-400016248934", 00:11:15.282 "strip_size_kb": 64, 00:11:15.282 "state": "configuring", 00:11:15.282 "raid_level": "concat", 00:11:15.282 "superblock": true, 00:11:15.282 "num_base_bdevs": 3, 00:11:15.282 "num_base_bdevs_discovered": 0, 00:11:15.282 "num_base_bdevs_operational": 3, 00:11:15.282 "base_bdevs_list": [ 00:11:15.282 { 00:11:15.282 "name": "BaseBdev1", 00:11:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.282 "is_configured": false, 00:11:15.282 "data_offset": 0, 00:11:15.282 "data_size": 0 00:11:15.282 }, 00:11:15.282 { 00:11:15.282 "name": "BaseBdev2", 00:11:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.282 "is_configured": false, 00:11:15.282 "data_offset": 0, 00:11:15.282 "data_size": 0 00:11:15.282 }, 00:11:15.282 { 00:11:15.282 "name": "BaseBdev3", 00:11:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.282 "is_configured": false, 00:11:15.282 "data_offset": 0, 00:11:15.282 "data_size": 0 00:11:15.282 } 00:11:15.282 ] 00:11:15.282 }' 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.282 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.852 20:04:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 [2024-12-05 20:04:17.046552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.853 [2024-12-05 20:04:17.046594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 [2024-12-05 20:04:17.058539] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.853 [2024-12-05 20:04:17.058638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.853 [2024-12-05 20:04:17.058685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.853 [2024-12-05 20:04:17.058716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.853 [2024-12-05 20:04:17.058746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.853 [2024-12-05 20:04:17.058773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.853 
20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 [2024-12-05 20:04:17.108879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.853 BaseBdev1 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 [ 00:11:15.853 { 
00:11:15.853 "name": "BaseBdev1", 00:11:15.853 "aliases": [ 00:11:15.853 "8f109c5c-2e02-4e89-a13b-202adbdf0408" 00:11:15.853 ], 00:11:15.853 "product_name": "Malloc disk", 00:11:15.853 "block_size": 512, 00:11:15.853 "num_blocks": 65536, 00:11:15.853 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:15.853 "assigned_rate_limits": { 00:11:15.853 "rw_ios_per_sec": 0, 00:11:15.853 "rw_mbytes_per_sec": 0, 00:11:15.853 "r_mbytes_per_sec": 0, 00:11:15.853 "w_mbytes_per_sec": 0 00:11:15.853 }, 00:11:15.853 "claimed": true, 00:11:15.853 "claim_type": "exclusive_write", 00:11:15.853 "zoned": false, 00:11:15.853 "supported_io_types": { 00:11:15.853 "read": true, 00:11:15.853 "write": true, 00:11:15.853 "unmap": true, 00:11:15.853 "flush": true, 00:11:15.853 "reset": true, 00:11:15.853 "nvme_admin": false, 00:11:15.853 "nvme_io": false, 00:11:15.853 "nvme_io_md": false, 00:11:15.853 "write_zeroes": true, 00:11:15.853 "zcopy": true, 00:11:15.853 "get_zone_info": false, 00:11:15.853 "zone_management": false, 00:11:15.853 "zone_append": false, 00:11:15.853 "compare": false, 00:11:15.853 "compare_and_write": false, 00:11:15.853 "abort": true, 00:11:15.853 "seek_hole": false, 00:11:15.853 "seek_data": false, 00:11:15.853 "copy": true, 00:11:15.853 "nvme_iov_md": false 00:11:15.853 }, 00:11:15.853 "memory_domains": [ 00:11:15.853 { 00:11:15.853 "dma_device_id": "system", 00:11:15.853 "dma_device_type": 1 00:11:15.853 }, 00:11:15.853 { 00:11:15.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.853 "dma_device_type": 2 00:11:15.853 } 00:11:15.853 ], 00:11:15.853 "driver_specific": {} 00:11:15.853 } 00:11:15.853 ] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.853 "name": "Existed_Raid", 00:11:15.853 "uuid": "2b74fc8d-2a83-4362-bbee-00acda87f8df", 00:11:15.853 "strip_size_kb": 64, 00:11:15.853 "state": "configuring", 00:11:15.853 "raid_level": "concat", 00:11:15.853 "superblock": true, 00:11:15.853 
"num_base_bdevs": 3, 00:11:15.853 "num_base_bdevs_discovered": 1, 00:11:15.853 "num_base_bdevs_operational": 3, 00:11:15.853 "base_bdevs_list": [ 00:11:15.853 { 00:11:15.853 "name": "BaseBdev1", 00:11:15.853 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:15.853 "is_configured": true, 00:11:15.853 "data_offset": 2048, 00:11:15.853 "data_size": 63488 00:11:15.853 }, 00:11:15.853 { 00:11:15.853 "name": "BaseBdev2", 00:11:15.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.853 "is_configured": false, 00:11:15.853 "data_offset": 0, 00:11:15.853 "data_size": 0 00:11:15.853 }, 00:11:15.853 { 00:11:15.853 "name": "BaseBdev3", 00:11:15.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.853 "is_configured": false, 00:11:15.853 "data_offset": 0, 00:11:15.853 "data_size": 0 00:11:15.853 } 00:11:15.853 ] 00:11:15.853 }' 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.853 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.113 [2024-12-05 20:04:17.528247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.113 [2024-12-05 20:04:17.528309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:16.113 
20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.113 [2024-12-05 20:04:17.540287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.113 [2024-12-05 20:04:17.542284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.113 [2024-12-05 20:04:17.542330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.113 [2024-12-05 20:04:17.542342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.113 [2024-12-05 20:04:17.542352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.113 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.372 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.372 "name": "Existed_Raid", 00:11:16.372 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:16.372 "strip_size_kb": 64, 00:11:16.372 "state": "configuring", 00:11:16.372 "raid_level": "concat", 00:11:16.372 "superblock": true, 00:11:16.372 "num_base_bdevs": 3, 00:11:16.372 "num_base_bdevs_discovered": 1, 00:11:16.372 "num_base_bdevs_operational": 3, 00:11:16.372 "base_bdevs_list": [ 00:11:16.372 { 00:11:16.372 "name": "BaseBdev1", 00:11:16.372 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:16.372 "is_configured": true, 00:11:16.372 "data_offset": 2048, 00:11:16.372 "data_size": 63488 00:11:16.372 }, 00:11:16.372 { 00:11:16.372 "name": "BaseBdev2", 00:11:16.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.372 "is_configured": false, 00:11:16.372 "data_offset": 0, 00:11:16.372 "data_size": 0 00:11:16.372 }, 00:11:16.372 { 00:11:16.372 "name": "BaseBdev3", 00:11:16.372 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:16.372 "is_configured": false, 00:11:16.372 "data_offset": 0, 00:11:16.372 "data_size": 0 00:11:16.372 } 00:11:16.373 ] 00:11:16.373 }' 00:11:16.373 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.373 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.632 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.632 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.632 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.632 [2024-12-05 20:04:18.013309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.632 BaseBdev2 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.632 [ 00:11:16.632 { 00:11:16.632 "name": "BaseBdev2", 00:11:16.632 "aliases": [ 00:11:16.632 "faaaff29-e153-4ae6-b9fb-d7fde466201d" 00:11:16.632 ], 00:11:16.632 "product_name": "Malloc disk", 00:11:16.632 "block_size": 512, 00:11:16.632 "num_blocks": 65536, 00:11:16.632 "uuid": "faaaff29-e153-4ae6-b9fb-d7fde466201d", 00:11:16.632 "assigned_rate_limits": { 00:11:16.632 "rw_ios_per_sec": 0, 00:11:16.632 "rw_mbytes_per_sec": 0, 00:11:16.632 "r_mbytes_per_sec": 0, 00:11:16.632 "w_mbytes_per_sec": 0 00:11:16.632 }, 00:11:16.632 "claimed": true, 00:11:16.632 "claim_type": "exclusive_write", 00:11:16.632 "zoned": false, 00:11:16.632 "supported_io_types": { 00:11:16.632 "read": true, 00:11:16.632 "write": true, 00:11:16.632 "unmap": true, 00:11:16.632 "flush": true, 00:11:16.632 "reset": true, 00:11:16.632 "nvme_admin": false, 00:11:16.632 "nvme_io": false, 00:11:16.632 "nvme_io_md": false, 00:11:16.632 "write_zeroes": true, 00:11:16.632 "zcopy": true, 00:11:16.632 "get_zone_info": false, 00:11:16.632 "zone_management": false, 00:11:16.632 "zone_append": false, 00:11:16.632 "compare": false, 00:11:16.632 "compare_and_write": false, 00:11:16.632 "abort": true, 00:11:16.632 "seek_hole": false, 00:11:16.632 "seek_data": false, 00:11:16.632 "copy": true, 00:11:16.632 "nvme_iov_md": false 00:11:16.632 }, 00:11:16.632 "memory_domains": [ 00:11:16.632 { 00:11:16.632 "dma_device_id": "system", 00:11:16.632 "dma_device_type": 1 00:11:16.632 }, 00:11:16.632 { 00:11:16.632 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.632 "dma_device_type": 2 00:11:16.632 } 00:11:16.632 ], 00:11:16.632 "driver_specific": {} 00:11:16.632 } 00:11:16.632 ] 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.632 20:04:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.633 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.633 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.892 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.892 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.892 "name": "Existed_Raid", 00:11:16.892 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:16.892 "strip_size_kb": 64, 00:11:16.892 "state": "configuring", 00:11:16.892 "raid_level": "concat", 00:11:16.892 "superblock": true, 00:11:16.892 "num_base_bdevs": 3, 00:11:16.892 "num_base_bdevs_discovered": 2, 00:11:16.892 "num_base_bdevs_operational": 3, 00:11:16.892 "base_bdevs_list": [ 00:11:16.892 { 00:11:16.892 "name": "BaseBdev1", 00:11:16.892 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:16.892 "is_configured": true, 00:11:16.892 "data_offset": 2048, 00:11:16.892 "data_size": 63488 00:11:16.892 }, 00:11:16.892 { 00:11:16.892 "name": "BaseBdev2", 00:11:16.892 "uuid": "faaaff29-e153-4ae6-b9fb-d7fde466201d", 00:11:16.892 "is_configured": true, 00:11:16.892 "data_offset": 2048, 00:11:16.892 "data_size": 63488 00:11:16.892 }, 00:11:16.892 { 00:11:16.892 "name": "BaseBdev3", 00:11:16.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.892 "is_configured": false, 00:11:16.892 "data_offset": 0, 00:11:16.892 "data_size": 0 00:11:16.892 } 00:11:16.892 ] 00:11:16.892 }' 00:11:16.892 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.892 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.152 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.152 20:04:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.153 [2024-12-05 20:04:18.548868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.153 [2024-12-05 20:04:18.549209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:17.153 [2024-12-05 20:04:18.549237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:17.153 [2024-12-05 20:04:18.549662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.153 BaseBdev3 00:11:17.153 [2024-12-05 20:04:18.549854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:17.153 [2024-12-05 20:04:18.549882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:17.153 [2024-12-05 20:04:18.550071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.153 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.153 [ 00:11:17.153 { 00:11:17.153 "name": "BaseBdev3", 00:11:17.153 "aliases": [ 00:11:17.153 "fc0bde6b-646d-4dc1-ae4d-03aa830d41ec" 00:11:17.153 ], 00:11:17.153 "product_name": "Malloc disk", 00:11:17.153 "block_size": 512, 00:11:17.153 "num_blocks": 65536, 00:11:17.153 "uuid": "fc0bde6b-646d-4dc1-ae4d-03aa830d41ec", 00:11:17.153 "assigned_rate_limits": { 00:11:17.153 "rw_ios_per_sec": 0, 00:11:17.153 "rw_mbytes_per_sec": 0, 00:11:17.153 "r_mbytes_per_sec": 0, 00:11:17.153 "w_mbytes_per_sec": 0 00:11:17.153 }, 00:11:17.153 "claimed": true, 00:11:17.153 "claim_type": "exclusive_write", 00:11:17.153 "zoned": false, 00:11:17.153 "supported_io_types": { 00:11:17.153 "read": true, 00:11:17.153 "write": true, 00:11:17.153 "unmap": true, 00:11:17.153 "flush": true, 00:11:17.153 "reset": true, 00:11:17.153 "nvme_admin": false, 00:11:17.153 "nvme_io": false, 00:11:17.153 "nvme_io_md": false, 00:11:17.153 "write_zeroes": true, 00:11:17.153 "zcopy": true, 00:11:17.153 "get_zone_info": false, 00:11:17.153 "zone_management": false, 00:11:17.153 "zone_append": false, 00:11:17.153 "compare": false, 00:11:17.413 "compare_and_write": false, 00:11:17.413 "abort": true, 00:11:17.413 "seek_hole": false, 00:11:17.413 "seek_data": false, 
00:11:17.413 "copy": true, 00:11:17.413 "nvme_iov_md": false 00:11:17.413 }, 00:11:17.413 "memory_domains": [ 00:11:17.413 { 00:11:17.413 "dma_device_id": "system", 00:11:17.413 "dma_device_type": 1 00:11:17.413 }, 00:11:17.413 { 00:11:17.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.413 "dma_device_type": 2 00:11:17.413 } 00:11:17.413 ], 00:11:17.413 "driver_specific": {} 00:11:17.413 } 00:11:17.413 ] 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.413 "name": "Existed_Raid", 00:11:17.413 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:17.413 "strip_size_kb": 64, 00:11:17.413 "state": "online", 00:11:17.413 "raid_level": "concat", 00:11:17.413 "superblock": true, 00:11:17.413 "num_base_bdevs": 3, 00:11:17.413 "num_base_bdevs_discovered": 3, 00:11:17.413 "num_base_bdevs_operational": 3, 00:11:17.413 "base_bdevs_list": [ 00:11:17.413 { 00:11:17.413 "name": "BaseBdev1", 00:11:17.413 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:17.413 "is_configured": true, 00:11:17.413 "data_offset": 2048, 00:11:17.413 "data_size": 63488 00:11:17.413 }, 00:11:17.413 { 00:11:17.413 "name": "BaseBdev2", 00:11:17.413 "uuid": "faaaff29-e153-4ae6-b9fb-d7fde466201d", 00:11:17.413 "is_configured": true, 00:11:17.413 "data_offset": 2048, 00:11:17.413 "data_size": 63488 00:11:17.413 }, 00:11:17.413 { 00:11:17.413 "name": "BaseBdev3", 00:11:17.413 "uuid": "fc0bde6b-646d-4dc1-ae4d-03aa830d41ec", 00:11:17.413 "is_configured": true, 00:11:17.413 "data_offset": 2048, 00:11:17.413 "data_size": 63488 00:11:17.413 } 00:11:17.413 ] 00:11:17.413 }' 00:11:17.413 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.413 20:04:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.673 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.674 [2024-12-05 20:04:19.016602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.674 "name": "Existed_Raid", 00:11:17.674 "aliases": [ 00:11:17.674 "097598ee-cf7e-4e00-be26-8f04907116c5" 00:11:17.674 ], 00:11:17.674 "product_name": "Raid Volume", 00:11:17.674 "block_size": 512, 00:11:17.674 "num_blocks": 190464, 00:11:17.674 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:17.674 "assigned_rate_limits": { 00:11:17.674 "rw_ios_per_sec": 0, 00:11:17.674 "rw_mbytes_per_sec": 0, 00:11:17.674 
"r_mbytes_per_sec": 0, 00:11:17.674 "w_mbytes_per_sec": 0 00:11:17.674 }, 00:11:17.674 "claimed": false, 00:11:17.674 "zoned": false, 00:11:17.674 "supported_io_types": { 00:11:17.674 "read": true, 00:11:17.674 "write": true, 00:11:17.674 "unmap": true, 00:11:17.674 "flush": true, 00:11:17.674 "reset": true, 00:11:17.674 "nvme_admin": false, 00:11:17.674 "nvme_io": false, 00:11:17.674 "nvme_io_md": false, 00:11:17.674 "write_zeroes": true, 00:11:17.674 "zcopy": false, 00:11:17.674 "get_zone_info": false, 00:11:17.674 "zone_management": false, 00:11:17.674 "zone_append": false, 00:11:17.674 "compare": false, 00:11:17.674 "compare_and_write": false, 00:11:17.674 "abort": false, 00:11:17.674 "seek_hole": false, 00:11:17.674 "seek_data": false, 00:11:17.674 "copy": false, 00:11:17.674 "nvme_iov_md": false 00:11:17.674 }, 00:11:17.674 "memory_domains": [ 00:11:17.674 { 00:11:17.674 "dma_device_id": "system", 00:11:17.674 "dma_device_type": 1 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.674 "dma_device_type": 2 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "system", 00:11:17.674 "dma_device_type": 1 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.674 "dma_device_type": 2 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "system", 00:11:17.674 "dma_device_type": 1 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.674 "dma_device_type": 2 00:11:17.674 } 00:11:17.674 ], 00:11:17.674 "driver_specific": { 00:11:17.674 "raid": { 00:11:17.674 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:17.674 "strip_size_kb": 64, 00:11:17.674 "state": "online", 00:11:17.674 "raid_level": "concat", 00:11:17.674 "superblock": true, 00:11:17.674 "num_base_bdevs": 3, 00:11:17.674 "num_base_bdevs_discovered": 3, 00:11:17.674 "num_base_bdevs_operational": 3, 00:11:17.674 "base_bdevs_list": [ 00:11:17.674 { 00:11:17.674 
"name": "BaseBdev1", 00:11:17.674 "uuid": "8f109c5c-2e02-4e89-a13b-202adbdf0408", 00:11:17.674 "is_configured": true, 00:11:17.674 "data_offset": 2048, 00:11:17.674 "data_size": 63488 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "name": "BaseBdev2", 00:11:17.674 "uuid": "faaaff29-e153-4ae6-b9fb-d7fde466201d", 00:11:17.674 "is_configured": true, 00:11:17.674 "data_offset": 2048, 00:11:17.674 "data_size": 63488 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "name": "BaseBdev3", 00:11:17.674 "uuid": "fc0bde6b-646d-4dc1-ae4d-03aa830d41ec", 00:11:17.674 "is_configured": true, 00:11:17.674 "data_offset": 2048, 00:11:17.674 "data_size": 63488 00:11:17.674 } 00:11:17.674 ] 00:11:17.674 } 00:11:17.674 } 00:11:17.674 }' 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.674 BaseBdev2 00:11:17.674 BaseBdev3' 00:11:17.674 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.933 20:04:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.933 [2024-12-05 20:04:19.255922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.933 [2024-12-05 20:04:19.255952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.933 [2024-12-05 20:04:19.256008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.933 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.192 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.192 "name": "Existed_Raid", 00:11:18.192 "uuid": "097598ee-cf7e-4e00-be26-8f04907116c5", 00:11:18.192 "strip_size_kb": 64, 00:11:18.192 "state": "offline", 00:11:18.192 "raid_level": "concat", 00:11:18.192 "superblock": true, 00:11:18.192 "num_base_bdevs": 3, 00:11:18.192 "num_base_bdevs_discovered": 2, 00:11:18.192 "num_base_bdevs_operational": 2, 00:11:18.192 "base_bdevs_list": [ 00:11:18.192 { 00:11:18.192 "name": null, 00:11:18.192 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.192 "is_configured": false, 00:11:18.192 "data_offset": 0, 00:11:18.192 "data_size": 63488 00:11:18.192 }, 00:11:18.192 { 00:11:18.192 "name": "BaseBdev2", 00:11:18.192 "uuid": "faaaff29-e153-4ae6-b9fb-d7fde466201d", 00:11:18.192 "is_configured": true, 00:11:18.192 "data_offset": 2048, 00:11:18.192 "data_size": 63488 00:11:18.192 }, 00:11:18.192 { 00:11:18.192 "name": "BaseBdev3", 00:11:18.192 "uuid": "fc0bde6b-646d-4dc1-ae4d-03aa830d41ec", 00:11:18.192 "is_configured": true, 00:11:18.192 "data_offset": 2048, 00:11:18.192 "data_size": 63488 00:11:18.192 } 00:11:18.192 ] 00:11:18.192 }' 00:11:18.192 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.192 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.451 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.451 [2024-12-05 20:04:19.870350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.710 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.710 [2024-12-05 20:04:20.028614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.710 [2024-12-05 20:04:20.028676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.710 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.969 BaseBdev2 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.969 
20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.969 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 [ 00:11:18.970 { 00:11:18.970 "name": "BaseBdev2", 00:11:18.970 "aliases": [ 00:11:18.970 "ea5fad26-1387-4a7e-a4ae-5642a75ae855" 00:11:18.970 ], 00:11:18.970 "product_name": "Malloc disk", 00:11:18.970 "block_size": 512, 00:11:18.970 "num_blocks": 65536, 00:11:18.970 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:18.970 "assigned_rate_limits": { 00:11:18.970 "rw_ios_per_sec": 0, 00:11:18.970 "rw_mbytes_per_sec": 0, 00:11:18.970 "r_mbytes_per_sec": 0, 00:11:18.970 "w_mbytes_per_sec": 0 
00:11:18.970 }, 00:11:18.970 "claimed": false, 00:11:18.970 "zoned": false, 00:11:18.970 "supported_io_types": { 00:11:18.970 "read": true, 00:11:18.970 "write": true, 00:11:18.970 "unmap": true, 00:11:18.970 "flush": true, 00:11:18.970 "reset": true, 00:11:18.970 "nvme_admin": false, 00:11:18.970 "nvme_io": false, 00:11:18.970 "nvme_io_md": false, 00:11:18.970 "write_zeroes": true, 00:11:18.970 "zcopy": true, 00:11:18.970 "get_zone_info": false, 00:11:18.970 "zone_management": false, 00:11:18.970 "zone_append": false, 00:11:18.970 "compare": false, 00:11:18.970 "compare_and_write": false, 00:11:18.970 "abort": true, 00:11:18.970 "seek_hole": false, 00:11:18.970 "seek_data": false, 00:11:18.970 "copy": true, 00:11:18.970 "nvme_iov_md": false 00:11:18.970 }, 00:11:18.970 "memory_domains": [ 00:11:18.970 { 00:11:18.970 "dma_device_id": "system", 00:11:18.970 "dma_device_type": 1 00:11:18.970 }, 00:11:18.970 { 00:11:18.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.970 "dma_device_type": 2 00:11:18.970 } 00:11:18.970 ], 00:11:18.970 "driver_specific": {} 00:11:18.970 } 00:11:18.970 ] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 BaseBdev3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 [ 00:11:18.970 { 00:11:18.970 "name": "BaseBdev3", 00:11:18.970 "aliases": [ 00:11:18.970 "4ab9fb57-4744-47ac-8df2-4905fb92f09c" 00:11:18.970 ], 00:11:18.970 "product_name": "Malloc disk", 00:11:18.970 "block_size": 512, 00:11:18.970 "num_blocks": 65536, 00:11:18.970 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:18.970 "assigned_rate_limits": { 00:11:18.970 "rw_ios_per_sec": 0, 00:11:18.970 "rw_mbytes_per_sec": 0, 
00:11:18.970 "r_mbytes_per_sec": 0, 00:11:18.970 "w_mbytes_per_sec": 0 00:11:18.970 }, 00:11:18.970 "claimed": false, 00:11:18.970 "zoned": false, 00:11:18.970 "supported_io_types": { 00:11:18.970 "read": true, 00:11:18.970 "write": true, 00:11:18.970 "unmap": true, 00:11:18.970 "flush": true, 00:11:18.970 "reset": true, 00:11:18.970 "nvme_admin": false, 00:11:18.970 "nvme_io": false, 00:11:18.970 "nvme_io_md": false, 00:11:18.970 "write_zeroes": true, 00:11:18.970 "zcopy": true, 00:11:18.970 "get_zone_info": false, 00:11:18.970 "zone_management": false, 00:11:18.970 "zone_append": false, 00:11:18.970 "compare": false, 00:11:18.970 "compare_and_write": false, 00:11:18.970 "abort": true, 00:11:18.970 "seek_hole": false, 00:11:18.970 "seek_data": false, 00:11:18.970 "copy": true, 00:11:18.970 "nvme_iov_md": false 00:11:18.970 }, 00:11:18.970 "memory_domains": [ 00:11:18.970 { 00:11:18.970 "dma_device_id": "system", 00:11:18.970 "dma_device_type": 1 00:11:18.970 }, 00:11:18.970 { 00:11:18.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.970 "dma_device_type": 2 00:11:18.970 } 00:11:18.970 ], 00:11:18.970 "driver_specific": {} 00:11:18.970 } 00:11:18.970 ] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.970 [2024-12-05 20:04:20.350435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.970 [2024-12-05 20:04:20.350539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.970 [2024-12-05 20:04:20.350590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.970 [2024-12-05 20:04:20.352597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.970 20:04:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.230 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.230 "name": "Existed_Raid", 00:11:19.230 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:19.230 "strip_size_kb": 64, 00:11:19.230 "state": "configuring", 00:11:19.230 "raid_level": "concat", 00:11:19.230 "superblock": true, 00:11:19.230 "num_base_bdevs": 3, 00:11:19.230 "num_base_bdevs_discovered": 2, 00:11:19.230 "num_base_bdevs_operational": 3, 00:11:19.230 "base_bdevs_list": [ 00:11:19.230 { 00:11:19.230 "name": "BaseBdev1", 00:11:19.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.230 "is_configured": false, 00:11:19.230 "data_offset": 0, 00:11:19.230 "data_size": 0 00:11:19.230 }, 00:11:19.230 { 00:11:19.230 "name": "BaseBdev2", 00:11:19.230 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:19.230 "is_configured": true, 00:11:19.230 "data_offset": 2048, 00:11:19.230 "data_size": 63488 00:11:19.230 }, 00:11:19.230 { 00:11:19.230 "name": "BaseBdev3", 00:11:19.230 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:19.230 "is_configured": true, 00:11:19.230 "data_offset": 2048, 00:11:19.230 "data_size": 63488 00:11:19.230 } 00:11:19.230 ] 00:11:19.230 }' 00:11:19.230 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.230 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.490 [2024-12-05 20:04:20.797701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.490 "name": "Existed_Raid", 00:11:19.490 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:19.490 "strip_size_kb": 64, 00:11:19.490 "state": "configuring", 00:11:19.490 "raid_level": "concat", 00:11:19.490 "superblock": true, 00:11:19.490 "num_base_bdevs": 3, 00:11:19.490 "num_base_bdevs_discovered": 1, 00:11:19.490 "num_base_bdevs_operational": 3, 00:11:19.490 "base_bdevs_list": [ 00:11:19.490 { 00:11:19.490 "name": "BaseBdev1", 00:11:19.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.490 "is_configured": false, 00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 0 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "name": null, 00:11:19.490 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:19.490 "is_configured": false, 00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 63488 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "name": "BaseBdev3", 00:11:19.490 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:19.490 "is_configured": true, 00:11:19.490 "data_offset": 2048, 00:11:19.490 "data_size": 63488 00:11:19.490 } 00:11:19.490 ] 00:11:19.490 }' 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.490 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.060 [2024-12-05 20:04:21.276898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.060 BaseBdev1 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.060 20:04:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.060 [ 00:11:20.060 { 00:11:20.060 "name": "BaseBdev1", 00:11:20.060 "aliases": [ 00:11:20.060 "1b81ec18-e183-4b06-af19-87eb7dd63ceb" 00:11:20.060 ], 00:11:20.060 "product_name": "Malloc disk", 00:11:20.060 "block_size": 512, 00:11:20.060 "num_blocks": 65536, 00:11:20.060 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:20.060 "assigned_rate_limits": { 00:11:20.060 "rw_ios_per_sec": 0, 00:11:20.060 "rw_mbytes_per_sec": 0, 00:11:20.060 "r_mbytes_per_sec": 0, 00:11:20.060 "w_mbytes_per_sec": 0 00:11:20.060 }, 00:11:20.060 "claimed": true, 00:11:20.060 "claim_type": "exclusive_write", 00:11:20.060 "zoned": false, 00:11:20.060 "supported_io_types": { 00:11:20.060 "read": true, 00:11:20.060 "write": true, 00:11:20.060 "unmap": true, 00:11:20.060 "flush": true, 00:11:20.060 "reset": true, 00:11:20.060 "nvme_admin": false, 00:11:20.060 "nvme_io": false, 00:11:20.060 "nvme_io_md": false, 00:11:20.060 "write_zeroes": true, 00:11:20.060 "zcopy": true, 00:11:20.060 "get_zone_info": false, 00:11:20.060 "zone_management": false, 00:11:20.060 "zone_append": false, 00:11:20.060 "compare": false, 00:11:20.060 "compare_and_write": false, 00:11:20.060 "abort": true, 00:11:20.060 "seek_hole": false, 00:11:20.060 "seek_data": false, 00:11:20.060 "copy": true, 00:11:20.060 "nvme_iov_md": false 00:11:20.060 }, 00:11:20.060 "memory_domains": [ 00:11:20.060 { 00:11:20.060 "dma_device_id": "system", 00:11:20.060 "dma_device_type": 1 00:11:20.060 }, 00:11:20.060 { 00:11:20.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.060 
"dma_device_type": 2 00:11:20.060 } 00:11:20.060 ], 00:11:20.060 "driver_specific": {} 00:11:20.060 } 00:11:20.060 ] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.060 "name": "Existed_Raid", 00:11:20.060 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:20.060 "strip_size_kb": 64, 00:11:20.060 "state": "configuring", 00:11:20.060 "raid_level": "concat", 00:11:20.060 "superblock": true, 00:11:20.060 "num_base_bdevs": 3, 00:11:20.060 "num_base_bdevs_discovered": 2, 00:11:20.060 "num_base_bdevs_operational": 3, 00:11:20.060 "base_bdevs_list": [ 00:11:20.060 { 00:11:20.060 "name": "BaseBdev1", 00:11:20.060 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:20.060 "is_configured": true, 00:11:20.060 "data_offset": 2048, 00:11:20.060 "data_size": 63488 00:11:20.060 }, 00:11:20.060 { 00:11:20.060 "name": null, 00:11:20.060 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:20.060 "is_configured": false, 00:11:20.060 "data_offset": 0, 00:11:20.060 "data_size": 63488 00:11:20.060 }, 00:11:20.060 { 00:11:20.060 "name": "BaseBdev3", 00:11:20.060 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:20.060 "is_configured": true, 00:11:20.060 "data_offset": 2048, 00:11:20.060 "data_size": 63488 00:11:20.060 } 00:11:20.060 ] 00:11:20.060 }' 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.060 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.320 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.320 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.320 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.579 [2024-12-05 20:04:21.788123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.579 
20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.579 "name": "Existed_Raid", 00:11:20.579 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:20.579 "strip_size_kb": 64, 00:11:20.579 "state": "configuring", 00:11:20.579 "raid_level": "concat", 00:11:20.579 "superblock": true, 00:11:20.579 "num_base_bdevs": 3, 00:11:20.579 "num_base_bdevs_discovered": 1, 00:11:20.579 "num_base_bdevs_operational": 3, 00:11:20.579 "base_bdevs_list": [ 00:11:20.579 { 00:11:20.579 "name": "BaseBdev1", 00:11:20.579 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:20.579 "is_configured": true, 00:11:20.579 "data_offset": 2048, 00:11:20.579 "data_size": 63488 00:11:20.579 }, 00:11:20.579 { 00:11:20.579 "name": null, 00:11:20.579 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:20.579 "is_configured": false, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 63488 00:11:20.579 }, 00:11:20.579 { 00:11:20.579 "name": null, 00:11:20.579 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:20.579 "is_configured": false, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 63488 00:11:20.579 } 00:11:20.579 ] 00:11:20.579 }' 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.579 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.838 
20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.838 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.838 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.838 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.117 [2024-12-05 20:04:22.303286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.117 "name": "Existed_Raid", 00:11:21.117 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:21.117 "strip_size_kb": 64, 00:11:21.117 "state": "configuring", 00:11:21.117 "raid_level": "concat", 00:11:21.117 "superblock": true, 00:11:21.117 "num_base_bdevs": 3, 00:11:21.117 "num_base_bdevs_discovered": 2, 00:11:21.117 "num_base_bdevs_operational": 3, 00:11:21.117 "base_bdevs_list": [ 00:11:21.117 { 00:11:21.117 "name": "BaseBdev1", 00:11:21.117 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:21.117 "is_configured": true, 00:11:21.117 "data_offset": 2048, 00:11:21.117 "data_size": 63488 00:11:21.117 }, 00:11:21.117 { 00:11:21.117 "name": null, 00:11:21.117 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:21.117 "is_configured": false, 00:11:21.117 "data_offset": 0, 00:11:21.117 "data_size": 
63488 00:11:21.117 }, 00:11:21.117 { 00:11:21.117 "name": "BaseBdev3", 00:11:21.117 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:21.117 "is_configured": true, 00:11:21.117 "data_offset": 2048, 00:11:21.117 "data_size": 63488 00:11:21.117 } 00:11:21.117 ] 00:11:21.117 }' 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.117 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.397 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.397 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.397 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.397 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.397 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.656 [2024-12-05 20:04:22.854389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.656 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.657 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.657 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.657 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.657 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.657 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.657 "name": "Existed_Raid", 00:11:21.657 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:21.657 "strip_size_kb": 64, 00:11:21.657 "state": "configuring", 00:11:21.657 "raid_level": "concat", 00:11:21.657 "superblock": true, 00:11:21.657 "num_base_bdevs": 3, 00:11:21.657 "num_base_bdevs_discovered": 1, 00:11:21.657 "num_base_bdevs_operational": 
3, 00:11:21.657 "base_bdevs_list": [ 00:11:21.657 { 00:11:21.657 "name": null, 00:11:21.657 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:21.657 "is_configured": false, 00:11:21.657 "data_offset": 0, 00:11:21.657 "data_size": 63488 00:11:21.657 }, 00:11:21.657 { 00:11:21.657 "name": null, 00:11:21.657 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:21.657 "is_configured": false, 00:11:21.657 "data_offset": 0, 00:11:21.657 "data_size": 63488 00:11:21.657 }, 00:11:21.657 { 00:11:21.657 "name": "BaseBdev3", 00:11:21.657 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:21.657 "is_configured": true, 00:11:21.657 "data_offset": 2048, 00:11:21.657 "data_size": 63488 00:11:21.657 } 00:11:21.657 ] 00:11:21.657 }' 00:11:21.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:22.224 [2024-12-05 20:04:23.472812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:22.224 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.224 "name": "Existed_Raid", 00:11:22.224 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:22.224 "strip_size_kb": 64, 00:11:22.224 "state": "configuring", 00:11:22.224 "raid_level": "concat", 00:11:22.224 "superblock": true, 00:11:22.224 "num_base_bdevs": 3, 00:11:22.224 "num_base_bdevs_discovered": 2, 00:11:22.224 "num_base_bdevs_operational": 3, 00:11:22.224 "base_bdevs_list": [ 00:11:22.224 { 00:11:22.224 "name": null, 00:11:22.224 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:22.224 "is_configured": false, 00:11:22.224 "data_offset": 0, 00:11:22.224 "data_size": 63488 00:11:22.224 }, 00:11:22.224 { 00:11:22.224 "name": "BaseBdev2", 00:11:22.224 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:22.224 "is_configured": true, 00:11:22.224 "data_offset": 2048, 00:11:22.224 "data_size": 63488 00:11:22.224 }, 00:11:22.224 { 00:11:22.224 "name": "BaseBdev3", 00:11:22.224 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:22.224 "is_configured": true, 00:11:22.225 "data_offset": 2048, 00:11:22.225 "data_size": 63488 00:11:22.225 } 00:11:22.225 ] 00:11:22.225 }' 00:11:22.225 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.225 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.791 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b81ec18-e183-4b06-af19-87eb7dd63ceb 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 [2024-12-05 20:04:24.074966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.791 [2024-12-05 20:04:24.075222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.791 [2024-12-05 20:04:24.075240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:22.791 [2024-12-05 20:04:24.075526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:22.791 [2024-12-05 20:04:24.075684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.791 [2024-12-05 20:04:24.075695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.791 NewBaseBdev 00:11:22.791 [2024-12-05 20:04:24.075859] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.791 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.791 [ 00:11:22.791 { 00:11:22.791 "name": "NewBaseBdev", 00:11:22.791 "aliases": [ 00:11:22.791 "1b81ec18-e183-4b06-af19-87eb7dd63ceb" 00:11:22.791 ], 00:11:22.791 "product_name": "Malloc disk", 00:11:22.791 "block_size": 512, 00:11:22.791 "num_blocks": 65536, 00:11:22.791 "uuid": 
"1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:22.791 "assigned_rate_limits": { 00:11:22.791 "rw_ios_per_sec": 0, 00:11:22.791 "rw_mbytes_per_sec": 0, 00:11:22.791 "r_mbytes_per_sec": 0, 00:11:22.791 "w_mbytes_per_sec": 0 00:11:22.791 }, 00:11:22.791 "claimed": true, 00:11:22.791 "claim_type": "exclusive_write", 00:11:22.792 "zoned": false, 00:11:22.792 "supported_io_types": { 00:11:22.792 "read": true, 00:11:22.792 "write": true, 00:11:22.792 "unmap": true, 00:11:22.792 "flush": true, 00:11:22.792 "reset": true, 00:11:22.792 "nvme_admin": false, 00:11:22.792 "nvme_io": false, 00:11:22.792 "nvme_io_md": false, 00:11:22.792 "write_zeroes": true, 00:11:22.792 "zcopy": true, 00:11:22.792 "get_zone_info": false, 00:11:22.792 "zone_management": false, 00:11:22.792 "zone_append": false, 00:11:22.792 "compare": false, 00:11:22.792 "compare_and_write": false, 00:11:22.792 "abort": true, 00:11:22.792 "seek_hole": false, 00:11:22.792 "seek_data": false, 00:11:22.792 "copy": true, 00:11:22.792 "nvme_iov_md": false 00:11:22.792 }, 00:11:22.792 "memory_domains": [ 00:11:22.792 { 00:11:22.792 "dma_device_id": "system", 00:11:22.792 "dma_device_type": 1 00:11:22.792 }, 00:11:22.792 { 00:11:22.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.792 "dma_device_type": 2 00:11:22.792 } 00:11:22.792 ], 00:11:22.792 "driver_specific": {} 00:11:22.792 } 00:11:22.792 ] 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.792 20:04:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.792 "name": "Existed_Raid", 00:11:22.792 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:22.792 "strip_size_kb": 64, 00:11:22.792 "state": "online", 00:11:22.792 "raid_level": "concat", 00:11:22.792 "superblock": true, 00:11:22.792 "num_base_bdevs": 3, 00:11:22.792 "num_base_bdevs_discovered": 3, 00:11:22.792 "num_base_bdevs_operational": 3, 00:11:22.792 "base_bdevs_list": [ 00:11:22.792 { 00:11:22.792 "name": "NewBaseBdev", 00:11:22.792 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:22.792 "is_configured": 
true, 00:11:22.792 "data_offset": 2048, 00:11:22.792 "data_size": 63488 00:11:22.792 }, 00:11:22.792 { 00:11:22.792 "name": "BaseBdev2", 00:11:22.792 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:22.792 "is_configured": true, 00:11:22.792 "data_offset": 2048, 00:11:22.792 "data_size": 63488 00:11:22.792 }, 00:11:22.792 { 00:11:22.792 "name": "BaseBdev3", 00:11:22.792 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:22.792 "is_configured": true, 00:11:22.792 "data_offset": 2048, 00:11:22.792 "data_size": 63488 00:11:22.792 } 00:11:22.792 ] 00:11:22.792 }' 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.792 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.358 [2024-12-05 20:04:24.614436] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.358 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.359 "name": "Existed_Raid", 00:11:23.359 "aliases": [ 00:11:23.359 "da79d495-c3d7-4a71-9d6f-e78cb00421ed" 00:11:23.359 ], 00:11:23.359 "product_name": "Raid Volume", 00:11:23.359 "block_size": 512, 00:11:23.359 "num_blocks": 190464, 00:11:23.359 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:23.359 "assigned_rate_limits": { 00:11:23.359 "rw_ios_per_sec": 0, 00:11:23.359 "rw_mbytes_per_sec": 0, 00:11:23.359 "r_mbytes_per_sec": 0, 00:11:23.359 "w_mbytes_per_sec": 0 00:11:23.359 }, 00:11:23.359 "claimed": false, 00:11:23.359 "zoned": false, 00:11:23.359 "supported_io_types": { 00:11:23.359 "read": true, 00:11:23.359 "write": true, 00:11:23.359 "unmap": true, 00:11:23.359 "flush": true, 00:11:23.359 "reset": true, 00:11:23.359 "nvme_admin": false, 00:11:23.359 "nvme_io": false, 00:11:23.359 "nvme_io_md": false, 00:11:23.359 "write_zeroes": true, 00:11:23.359 "zcopy": false, 00:11:23.359 "get_zone_info": false, 00:11:23.359 "zone_management": false, 00:11:23.359 "zone_append": false, 00:11:23.359 "compare": false, 00:11:23.359 "compare_and_write": false, 00:11:23.359 "abort": false, 00:11:23.359 "seek_hole": false, 00:11:23.359 "seek_data": false, 00:11:23.359 "copy": false, 00:11:23.359 "nvme_iov_md": false 00:11:23.359 }, 00:11:23.359 "memory_domains": [ 00:11:23.359 { 00:11:23.359 "dma_device_id": "system", 00:11:23.359 "dma_device_type": 1 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.359 "dma_device_type": 2 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "system", 00:11:23.359 "dma_device_type": 1 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.359 
"dma_device_type": 2 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "system", 00:11:23.359 "dma_device_type": 1 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.359 "dma_device_type": 2 00:11:23.359 } 00:11:23.359 ], 00:11:23.359 "driver_specific": { 00:11:23.359 "raid": { 00:11:23.359 "uuid": "da79d495-c3d7-4a71-9d6f-e78cb00421ed", 00:11:23.359 "strip_size_kb": 64, 00:11:23.359 "state": "online", 00:11:23.359 "raid_level": "concat", 00:11:23.359 "superblock": true, 00:11:23.359 "num_base_bdevs": 3, 00:11:23.359 "num_base_bdevs_discovered": 3, 00:11:23.359 "num_base_bdevs_operational": 3, 00:11:23.359 "base_bdevs_list": [ 00:11:23.359 { 00:11:23.359 "name": "NewBaseBdev", 00:11:23.359 "uuid": "1b81ec18-e183-4b06-af19-87eb7dd63ceb", 00:11:23.359 "is_configured": true, 00:11:23.359 "data_offset": 2048, 00:11:23.359 "data_size": 63488 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "name": "BaseBdev2", 00:11:23.359 "uuid": "ea5fad26-1387-4a7e-a4ae-5642a75ae855", 00:11:23.359 "is_configured": true, 00:11:23.359 "data_offset": 2048, 00:11:23.359 "data_size": 63488 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "name": "BaseBdev3", 00:11:23.359 "uuid": "4ab9fb57-4744-47ac-8df2-4905fb92f09c", 00:11:23.359 "is_configured": true, 00:11:23.359 "data_offset": 2048, 00:11:23.359 "data_size": 63488 00:11:23.359 } 00:11:23.359 ] 00:11:23.359 } 00:11:23.359 } 00:11:23.359 }' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.359 BaseBdev2 00:11:23.359 BaseBdev3' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.359 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.618 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.619 
20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.619 [2024-12-05 20:04:24.909634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.619 [2024-12-05 20:04:24.909664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.619 [2024-12-05 20:04:24.909766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.619 [2024-12-05 20:04:24.909822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.619 [2024-12-05 20:04:24.909834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.619 20:04:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66358 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66358 ']' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66358 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66358 00:11:23.619 killing process with pid 66358 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66358' 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66358 00:11:23.619 [2024-12-05 20:04:24.958386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.619 20:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66358 00:11:23.890 [2024-12-05 20:04:25.272657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.269 20:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.269 00:11:25.269 real 0m10.854s 00:11:25.269 user 0m17.276s 00:11:25.269 sys 0m1.826s 00:11:25.269 ************************************ 00:11:25.269 END TEST raid_state_function_test_sb 00:11:25.269 ************************************ 00:11:25.269 20:04:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.269 20:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.269 20:04:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:25.269 20:04:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.269 20:04:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.269 20:04:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.269 ************************************ 00:11:25.269 START TEST raid_superblock_test 00:11:25.269 ************************************ 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66978 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66978 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66978 ']' 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.269 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.269 [2024-12-05 20:04:26.605839] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:11:25.269 [2024-12-05 20:04:26.606083] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66978 ]
00:11:25.529 [2024-12-05 20:04:26.782053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.529 [2024-12-05 20:04:26.906738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.789 [2024-12-05 20:04:27.122798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.789 [2024-12-05 20:04:27.122867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.049 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 malloc1
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 [2024-12-05 20:04:27.504233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:26.310 [2024-12-05 20:04:27.504364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.310 [2024-12-05 20:04:27.504439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:26.310 [2024-12-05 20:04:27.504487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.310 [2024-12-05 20:04:27.506974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.310 [2024-12-05 20:04:27.507076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:26.310 pt1
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 malloc2
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 [2024-12-05 20:04:27.566282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:26.310 [2024-12-05 20:04:27.566350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.310 [2024-12-05 20:04:27.566382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:26.310 [2024-12-05 20:04:27.566393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.310 [2024-12-05 20:04:27.568740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.310 [2024-12-05 20:04:27.568877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:26.310 pt2
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 malloc3
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 [2024-12-05 20:04:27.633030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:26.310 [2024-12-05 20:04:27.633144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.310 [2024-12-05 20:04:27.633207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:26.310 [2024-12-05 20:04:27.633249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.310 [2024-12-05 20:04:27.635360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.310 [2024-12-05 20:04:27.635449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:26.310 pt3
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.310 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.310 [2024-12-05 20:04:27.645063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:26.310 [2024-12-05 20:04:27.646977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:26.310 [2024-12-05 20:04:27.647050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:26.310 [2024-12-05 20:04:27.647244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:26.310 [2024-12-05 20:04:27.647258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:11:26.310 [2024-12-05 20:04:27.647493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:26.310 [2024-12-05 20:04:27.647663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:26.310 [2024-12-05 20:04:27.647671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:26.310 [2024-12-05 20:04:27.647812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:26.311 "name": "raid_bdev1",
00:11:26.311 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c",
00:11:26.311 "strip_size_kb": 64,
00:11:26.311 "state": "online",
00:11:26.311 "raid_level": "concat",
00:11:26.311 "superblock": true,
00:11:26.311 "num_base_bdevs": 3,
00:11:26.311 "num_base_bdevs_discovered": 3,
00:11:26.311 "num_base_bdevs_operational": 3,
00:11:26.311 "base_bdevs_list": [
00:11:26.311 {
00:11:26.311 "name": "pt1",
00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:26.311 "is_configured": true,
00:11:26.311 "data_offset": 2048,
00:11:26.311 "data_size": 63488
00:11:26.311 },
00:11:26.311 {
00:11:26.311 "name": "pt2",
00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:26.311 "is_configured": true,
00:11:26.311 "data_offset": 2048,
00:11:26.311 "data_size": 63488
00:11:26.311 },
00:11:26.311 {
00:11:26.311 "name": "pt3",
00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:26.311 "is_configured": true,
00:11:26.311 "data_offset": 2048,
00:11:26.311 "data_size": 63488
00:11:26.311 }
00:11:26.311 ]
00:11:26.311 }'
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:26.311 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.882 [2024-12-05 20:04:28.104649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:26.882 "name": "raid_bdev1",
00:11:26.882 "aliases": [
00:11:26.882 "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c"
00:11:26.882 ],
00:11:26.882 "product_name": "Raid Volume",
00:11:26.882 "block_size": 512,
00:11:26.882 "num_blocks": 190464,
00:11:26.882 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c",
00:11:26.882 "assigned_rate_limits": {
00:11:26.882 "rw_ios_per_sec": 0,
00:11:26.882 "rw_mbytes_per_sec": 0,
00:11:26.882 "r_mbytes_per_sec": 0,
00:11:26.882 "w_mbytes_per_sec": 0
00:11:26.882 },
00:11:26.882 "claimed": false,
00:11:26.882 "zoned": false,
00:11:26.882 "supported_io_types": {
00:11:26.882 "read": true,
00:11:26.882 "write": true,
00:11:26.882 "unmap": true,
00:11:26.882 "flush": true,
00:11:26.882 "reset": true,
00:11:26.882 "nvme_admin": false,
00:11:26.882 "nvme_io": false,
00:11:26.882 "nvme_io_md": false,
00:11:26.882 "write_zeroes": true,
00:11:26.882 "zcopy": false,
00:11:26.882 "get_zone_info": false,
00:11:26.882 "zone_management": false,
00:11:26.882 "zone_append": false,
00:11:26.882 "compare": false,
00:11:26.882 "compare_and_write": false,
00:11:26.882 "abort": false,
00:11:26.882 "seek_hole": false,
00:11:26.882 "seek_data": false,
00:11:26.882 "copy": false,
00:11:26.882 "nvme_iov_md": false
00:11:26.882 },
00:11:26.882 "memory_domains": [
00:11:26.882 {
00:11:26.882 "dma_device_id": "system",
00:11:26.882 "dma_device_type": 1
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.882 "dma_device_type": 2
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "dma_device_id": "system",
00:11:26.882 "dma_device_type": 1
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.882 "dma_device_type": 2
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "dma_device_id": "system",
00:11:26.882 "dma_device_type": 1
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.882 "dma_device_type": 2
00:11:26.882 }
00:11:26.882 ],
00:11:26.882 "driver_specific": {
00:11:26.882 "raid": {
00:11:26.882 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c",
00:11:26.882 "strip_size_kb": 64,
00:11:26.882 "state": "online",
00:11:26.882 "raid_level": "concat",
00:11:26.882 "superblock": true,
00:11:26.882 "num_base_bdevs": 3,
00:11:26.882 "num_base_bdevs_discovered": 3,
00:11:26.882 "num_base_bdevs_operational": 3,
00:11:26.882 "base_bdevs_list": [
00:11:26.882 {
00:11:26.882 "name": "pt1",
00:11:26.882 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:26.882 "is_configured": true,
00:11:26.882 "data_offset": 2048,
00:11:26.882 "data_size": 63488
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "name": "pt2",
00:11:26.882 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:26.882 "is_configured": true,
00:11:26.882 "data_offset": 2048,
00:11:26.882 "data_size": 63488
00:11:26.882 },
00:11:26.882 {
00:11:26.882 "name": "pt3",
00:11:26.882 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:26.882 "is_configured": true,
00:11:26.882 "data_offset": 2048,
00:11:26.882 "data_size": 63488
00:11:26.882 }
00:11:26.882 ]
00:11:26.882 }
00:11:26.882 }
00:11:26.882 }'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:26.882 pt2
00:11:26.882 pt3'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.882 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.142 [2024-12-05 20:04:28.368114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c ']'
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.142 [2024-12-05 20:04:28.411739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:27.142 [2024-12-05 20:04:28.411813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:27.142 [2024-12-05 20:04:28.411935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:27.142 [2024-12-05 20:04:28.412039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:27.142 [2024-12-05 20:04:28.412108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.142 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.143 [2024-12-05 20:04:28.567567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:27.143 [2024-12-05 20:04:28.569637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:27.143 [2024-12-05 20:04:28.569750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:27.143 [2024-12-05 20:04:28.569855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:27.143 [2024-12-05 20:04:28.570009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:27.143 [2024-12-05 20:04:28.570100] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:27.143 [2024-12-05 20:04:28.570183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:27.143 [2024-12-05 20:04:28.570226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:27.143 request:
00:11:27.143 {
00:11:27.143 "name": "raid_bdev1",
00:11:27.143 "raid_level": "concat",
00:11:27.143 "base_bdevs": [
00:11:27.143 "malloc1",
00:11:27.143 "malloc2",
00:11:27.143 "malloc3"
00:11:27.143 ],
00:11:27.143 "strip_size_kb": 64,
00:11:27.143 "superblock": false,
00:11:27.143 "method": "bdev_raid_create",
00:11:27.143 "req_id": 1
00:11:27.143 }
00:11:27.143 Got JSON-RPC error response
00:11:27.143 response:
00:11:27.143 {
00:11:27.143 "code": -17,
00:11:27.143 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:27.143 }
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:27.143 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.402 [2024-12-05 20:04:28.635398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:27.402 [2024-12-05 20:04:28.635517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.402 [2024-12-05 20:04:28.635571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:27.402 [2024-12-05 20:04:28.635611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.402 [2024-12-05 20:04:28.637964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.402 [2024-12-05 20:04:28.638053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:27.402 [2024-12-05 20:04:28.638201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:27.402 [2024-12-05 20:04:28.638306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:27.402 pt1
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.402 "name": "raid_bdev1",
00:11:27.402 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c",
00:11:27.402 "strip_size_kb": 64,
00:11:27.402 "state": "configuring",
00:11:27.402 "raid_level": "concat",
00:11:27.402 "superblock": true,
00:11:27.402 "num_base_bdevs": 3,
00:11:27.402 "num_base_bdevs_discovered": 1,
00:11:27.402 "num_base_bdevs_operational": 3,
00:11:27.402 "base_bdevs_list": [
00:11:27.402 {
00:11:27.402 "name": "pt1",
00:11:27.402 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:27.402 "is_configured": true,
00:11:27.402 "data_offset": 2048,
00:11:27.402 "data_size": 63488
00:11:27.402 },
00:11:27.402 {
00:11:27.402 "name": null,
00:11:27.402 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:27.402 "is_configured": false,
00:11:27.402 "data_offset": 2048,
00:11:27.402 "data_size": 63488
00:11:27.402 },
00:11:27.402 {
00:11:27.402 "name": null,
00:11:27.402 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:27.402 "is_configured": false,
00:11:27.402 "data_offset": 2048,
00:11:27.402 "data_size": 63488
00:11:27.402 }
00:11:27.402 ]
00:11:27.402 }'
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.402 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.661 [2024-12-05 20:04:29.070641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:27.661 [2024-12-05 20:04:29.070780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.661 [2024-12-05 20:04:29.070850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:27.661 [2024-12-05 20:04:29.070899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.661 [2024-12-05 20:04:29.071387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.661 [2024-12-05 20:04:29.071450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:27.661 [2024-12-05 20:04:29.071594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:27.661 [2024-12-05 20:04:29.071667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:27.661 pt2
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.661 [2024-12-05 20:04:29.078619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.661 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.924 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.924 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.924 "name": "raid_bdev1",
00:11:27.924 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c",
00:11:27.924 "strip_size_kb": 64,
00:11:27.924 "state": "configuring",
00:11:27.924 "raid_level": "concat",
00:11:27.924 "superblock": true,
00:11:27.924 "num_base_bdevs": 3,
00:11:27.924 "num_base_bdevs_discovered": 1,
00:11:27.924 "num_base_bdevs_operational": 3,
00:11:27.924 "base_bdevs_list": [
00:11:27.924 {
00:11:27.924 "name": "pt1",
00:11:27.924 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:27.924 "is_configured": true,
00:11:27.924 "data_offset": 2048,
00:11:27.924 "data_size": 63488
00:11:27.924 },
00:11:27.924 {
00:11:27.924 "name": null,
00:11:27.924 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:27.924 "is_configured": false,
00:11:27.924 "data_offset": 0,
00:11:27.924 "data_size": 63488
00:11:27.924 },
00:11:27.924 {
00:11:27.924 "name": null,
00:11:27.924 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:27.924 "is_configured": false,
00:11:27.924 "data_offset": 2048,
00:11:27.924 "data_size": 63488
00:11:27.924 }
00:11:27.924 ]
00:11:27.924 }'
00:11:27.924 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.924 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.205 [2024-12-05 20:04:29.541857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:28.205 [2024-12-05 20:04:29.541937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.205 [2024-12-05 20:04:29.541957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:11:28.205 [2024-12-05 20:04:29.541967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.205 [2024-12-05 20:04:29.542453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.205 [2024-12-05 20:04:29.542491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:28.205 [2024-12-05 20:04:29.542580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:28.205 [2024-12-05 20:04:29.542619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:28.205 pt2
00:11:28.205 20:04:29
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.205 [2024-12-05 20:04:29.549810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:28.205 [2024-12-05 20:04:29.549864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.205 [2024-12-05 20:04:29.549880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.205 [2024-12-05 20:04:29.549899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.205 [2024-12-05 20:04:29.550307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.205 [2024-12-05 20:04:29.550345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:28.205 [2024-12-05 20:04:29.550411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:28.205 [2024-12-05 20:04:29.550433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:28.205 [2024-12-05 20:04:29.550572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.205 [2024-12-05 20:04:29.550590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:28.205 [2024-12-05 20:04:29.550834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:28.205 [2024-12-05 20:04:29.550997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.205 [2024-12-05 20:04:29.551005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:28.205 [2024-12-05 20:04:29.551155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.205 pt3 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.205 20:04:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.205 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.205 "name": "raid_bdev1", 00:11:28.205 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c", 00:11:28.205 "strip_size_kb": 64, 00:11:28.205 "state": "online", 00:11:28.205 "raid_level": "concat", 00:11:28.205 "superblock": true, 00:11:28.206 "num_base_bdevs": 3, 00:11:28.206 "num_base_bdevs_discovered": 3, 00:11:28.206 "num_base_bdevs_operational": 3, 00:11:28.206 "base_bdevs_list": [ 00:11:28.206 { 00:11:28.206 "name": "pt1", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "name": "pt2", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "name": "pt3", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 } 00:11:28.206 ] 00:11:28.206 }' 00:11:28.206 20:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.206 20:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.795 [2024-12-05 20:04:30.049387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.795 "name": "raid_bdev1", 00:11:28.795 "aliases": [ 00:11:28.795 "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c" 00:11:28.795 ], 00:11:28.795 "product_name": "Raid Volume", 00:11:28.795 "block_size": 512, 00:11:28.795 "num_blocks": 190464, 00:11:28.795 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c", 00:11:28.795 "assigned_rate_limits": { 00:11:28.795 "rw_ios_per_sec": 0, 00:11:28.795 "rw_mbytes_per_sec": 0, 00:11:28.795 "r_mbytes_per_sec": 0, 00:11:28.795 "w_mbytes_per_sec": 0 00:11:28.795 }, 00:11:28.795 "claimed": false, 00:11:28.795 "zoned": false, 00:11:28.795 "supported_io_types": { 00:11:28.795 "read": true, 00:11:28.795 "write": true, 00:11:28.795 "unmap": true, 00:11:28.795 "flush": true, 00:11:28.795 "reset": true, 00:11:28.795 "nvme_admin": false, 00:11:28.795 "nvme_io": false, 
00:11:28.795 "nvme_io_md": false, 00:11:28.795 "write_zeroes": true, 00:11:28.795 "zcopy": false, 00:11:28.795 "get_zone_info": false, 00:11:28.795 "zone_management": false, 00:11:28.795 "zone_append": false, 00:11:28.795 "compare": false, 00:11:28.795 "compare_and_write": false, 00:11:28.795 "abort": false, 00:11:28.795 "seek_hole": false, 00:11:28.795 "seek_data": false, 00:11:28.795 "copy": false, 00:11:28.795 "nvme_iov_md": false 00:11:28.795 }, 00:11:28.795 "memory_domains": [ 00:11:28.795 { 00:11:28.795 "dma_device_id": "system", 00:11:28.795 "dma_device_type": 1 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.795 "dma_device_type": 2 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "dma_device_id": "system", 00:11:28.795 "dma_device_type": 1 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.795 "dma_device_type": 2 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "dma_device_id": "system", 00:11:28.795 "dma_device_type": 1 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.795 "dma_device_type": 2 00:11:28.795 } 00:11:28.795 ], 00:11:28.795 "driver_specific": { 00:11:28.795 "raid": { 00:11:28.795 "uuid": "0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c", 00:11:28.795 "strip_size_kb": 64, 00:11:28.795 "state": "online", 00:11:28.795 "raid_level": "concat", 00:11:28.795 "superblock": true, 00:11:28.795 "num_base_bdevs": 3, 00:11:28.795 "num_base_bdevs_discovered": 3, 00:11:28.795 "num_base_bdevs_operational": 3, 00:11:28.795 "base_bdevs_list": [ 00:11:28.795 { 00:11:28.795 "name": "pt1", 00:11:28.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.795 "is_configured": true, 00:11:28.795 "data_offset": 2048, 00:11:28.795 "data_size": 63488 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "name": "pt2", 00:11:28.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.795 "is_configured": true, 00:11:28.795 "data_offset": 2048, 00:11:28.795 
"data_size": 63488 00:11:28.795 }, 00:11:28.795 { 00:11:28.795 "name": "pt3", 00:11:28.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.795 "is_configured": true, 00:11:28.795 "data_offset": 2048, 00:11:28.795 "data_size": 63488 00:11:28.795 } 00:11:28.795 ] 00:11:28.795 } 00:11:28.795 } 00:11:28.795 }' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:28.795 pt2 00:11:28.795 pt3' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.795 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:29.056 [2024-12-05 20:04:30.348775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c '!=' 0cbe5ad8-cbb6-4665-974f-8f3a10f2a19c ']' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66978 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66978 ']' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66978 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66978 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66978' 00:11:29.056 killing process with pid 66978 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66978 00:11:29.056 [2024-12-05 20:04:30.422970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:29.056 [2024-12-05 20:04:30.423126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.056 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66978 00:11:29.056 [2024-12-05 20:04:30.423233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.056 [2024-12-05 20:04:30.423250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:29.316 [2024-12-05 20:04:30.737119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.696 20:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:30.696 00:11:30.696 real 0m5.376s 00:11:30.696 user 0m7.720s 00:11:30.696 sys 0m0.915s 00:11:30.696 20:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.696 20:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.696 ************************************ 00:11:30.696 END TEST raid_superblock_test 00:11:30.696 ************************************ 00:11:30.696 20:04:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:30.696 20:04:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:30.696 20:04:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.696 20:04:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.696 ************************************ 00:11:30.696 START TEST raid_read_error_test 00:11:30.696 ************************************ 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:30.696 20:04:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1IMgmJ76Gx 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67237 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:30.696 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67237 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67237 ']' 00:11:30.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.697 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.697 [2024-12-05 20:04:32.066650] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:11:30.697 [2024-12-05 20:04:32.066783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67237 ] 00:11:30.955 [2024-12-05 20:04:32.238692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.955 [2024-12-05 20:04:32.356253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.219 [2024-12-05 20:04:32.549211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.219 [2024-12-05 20:04:32.549246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.478 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 BaseBdev1_malloc 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 true 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 [2024-12-05 20:04:32.965886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:31.738 [2024-12-05 20:04:32.965952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.738 [2024-12-05 20:04:32.965974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:31.738 [2024-12-05 20:04:32.965985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.738 [2024-12-05 20:04:32.968310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.738 [2024-12-05 20:04:32.968446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.738 BaseBdev1 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 BaseBdev2_malloc 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 true 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 [2024-12-05 20:04:33.031557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.738 [2024-12-05 20:04:33.031612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.738 [2024-12-05 20:04:33.031629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:31.738 [2024-12-05 20:04:33.031638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.738 [2024-12-05 20:04:33.033725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.738 [2024-12-05 20:04:33.033845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.738 BaseBdev2 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 BaseBdev3_malloc 00:11:31.738 20:04:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 true 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 [2024-12-05 20:04:33.110565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:31.738 [2024-12-05 20:04:33.110618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.738 [2024-12-05 20:04:33.110635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:31.738 [2024-12-05 20:04:33.110645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.738 [2024-12-05 20:04:33.112736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.738 [2024-12-05 20:04:33.112862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:31.738 BaseBdev3 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 [2024-12-05 20:04:33.122624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.738 [2024-12-05 20:04:33.124462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.738 [2024-12-05 20:04:33.124533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.738 [2024-12-05 20:04:33.124723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.738 [2024-12-05 20:04:33.124735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:31.738 [2024-12-05 20:04:33.124999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:31.738 [2024-12-05 20:04:33.125165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.738 [2024-12-05 20:04:33.125191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:31.738 [2024-12-05 20:04:33.125339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.738 20:04:33 
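(Editor's note: the records above show the per-device bdev stack this test assembles before creating the array. A dry-run sketch of that sequence, reconstructed from the `rpc_cmd` calls in the log — here `rpc` only echoes the command line; in the real test it is SPDK's `scripts/rpc.py` talking to a running target:)

```shell
#!/bin/sh
# Dry-run of the stack built for each base device in the log:
# malloc bdev -> error-injection bdev -> passthru bdev, then a 3-disk
# concat raid with a 64 KiB strip size and an on-disk superblock (-s).
rpc() { echo "rpc.py $*"; }   # stub; the real test sends these over /var/tmp/spdk.sock

for i in 1 2 3; do
  rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"          # 32 MiB, 512 B blocks
  rpc bdev_error_create "BaseBdev${i}_malloc"                     # wraps it as EE_BaseBdev${i}_malloc
  rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
```

The passthru layer on top of the error bdev is what lets the test later target `EE_BaseBdev1_malloc` with `bdev_error_inject_error` while the raid only ever sees `BaseBdev1`.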
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.738 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.997 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.997 "name": "raid_bdev1", 00:11:31.997 "uuid": "2b20ab76-7b05-460b-a301-5b3f4a51654c", 00:11:31.997 "strip_size_kb": 64, 00:11:31.997 "state": "online", 00:11:31.997 "raid_level": "concat", 00:11:31.997 "superblock": true, 00:11:31.997 "num_base_bdevs": 3, 00:11:31.997 "num_base_bdevs_discovered": 3, 00:11:31.997 "num_base_bdevs_operational": 3, 00:11:31.997 "base_bdevs_list": [ 00:11:31.997 { 00:11:31.997 "name": "BaseBdev1", 00:11:31.997 "uuid": "76383613-0ee4-55cb-804e-f4d70e29ce60", 00:11:31.997 "is_configured": true, 00:11:31.997 "data_offset": 2048, 00:11:31.997 "data_size": 63488 00:11:31.997 }, 00:11:31.997 { 00:11:31.997 "name": "BaseBdev2", 00:11:31.997 "uuid": "92aff52f-1759-5e3e-9744-ee25e9a2ee0f", 00:11:31.997 "is_configured": true, 00:11:31.997 "data_offset": 2048, 00:11:31.997 "data_size": 63488 
00:11:31.997 }, 00:11:31.997 { 00:11:31.997 "name": "BaseBdev3", 00:11:31.997 "uuid": "38bb2da0-9b35-5fa0-b30a-1248f97816e5", 00:11:31.997 "is_configured": true, 00:11:31.997 "data_offset": 2048, 00:11:31.997 "data_size": 63488 00:11:31.997 } 00:11:31.997 ] 00:11:31.997 }' 00:11:31.997 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.997 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.254 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.254 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.254 [2024-12-05 20:04:33.666928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
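(Editor's note: `verify_raid_bdev_state` above filters `bdev_raid_get_bdevs` output through `jq` and checks the discovered base-bdev count against the expected 3. A standalone sketch of that check against a trimmed, hypothetical copy of the JSON dumped above, using `grep -c` in place of `jq` so it needs no SPDK target:)

```shell
#!/bin/sh
# Minimal stand-in for the state check: count configured base bdevs in a
# trimmed copy of the raid_bdev_info JSON and compare to num_base_bdevs=3.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": true },
    { "name": "BaseBdev2", "is_configured": true },
    { "name": "BaseBdev3", "is_configured": true }
  ]
}'
num_discovered=$(printf '%s\n' "$raid_bdev_info" | grep -c '"is_configured": true')
[ "$num_discovered" -eq 3 ] && echo "raid_bdev1: all 3 base bdevs configured"
```

The real test additionally asserts `state == online`, the raid level, and the strip size from the same JSON.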
00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.188 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.189 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.189 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.189 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.189 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.447 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.447 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.447 "name": "raid_bdev1", 00:11:33.447 "uuid": "2b20ab76-7b05-460b-a301-5b3f4a51654c", 00:11:33.447 "strip_size_kb": 64, 00:11:33.447 "state": "online", 00:11:33.447 "raid_level": "concat", 00:11:33.447 "superblock": true, 00:11:33.447 "num_base_bdevs": 3, 00:11:33.447 "num_base_bdevs_discovered": 3, 00:11:33.447 "num_base_bdevs_operational": 3, 00:11:33.447 "base_bdevs_list": [ 00:11:33.447 { 00:11:33.447 "name": "BaseBdev1", 00:11:33.447 "uuid": "76383613-0ee4-55cb-804e-f4d70e29ce60", 00:11:33.447 "is_configured": true, 00:11:33.447 "data_offset": 2048, 00:11:33.447 "data_size": 63488 
00:11:33.447 }, 00:11:33.447 { 00:11:33.447 "name": "BaseBdev2", 00:11:33.447 "uuid": "92aff52f-1759-5e3e-9744-ee25e9a2ee0f", 00:11:33.447 "is_configured": true, 00:11:33.447 "data_offset": 2048, 00:11:33.447 "data_size": 63488 00:11:33.447 }, 00:11:33.447 { 00:11:33.447 "name": "BaseBdev3", 00:11:33.447 "uuid": "38bb2da0-9b35-5fa0-b30a-1248f97816e5", 00:11:33.447 "is_configured": true, 00:11:33.447 "data_offset": 2048, 00:11:33.447 "data_size": 63488 00:11:33.447 } 00:11:33.447 ] 00:11:33.447 }' 00:11:33.447 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.447 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 [2024-12-05 20:04:35.094917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.706 [2024-12-05 20:04:35.095017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.706 [2024-12-05 20:04:35.098067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.706 [2024-12-05 20:04:35.098168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.706 [2024-12-05 20:04:35.098250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.706 [2024-12-05 20:04:35.098310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:33.706 { 00:11:33.706 "results": [ 00:11:33.706 { 00:11:33.706 "job": "raid_bdev1", 00:11:33.706 "core_mask": "0x1", 00:11:33.706 "workload": "randrw", 00:11:33.706 "percentage": 50, 
00:11:33.706 "status": "finished", 00:11:33.706 "queue_depth": 1, 00:11:33.706 "io_size": 131072, 00:11:33.706 "runtime": 1.429151, 00:11:33.706 "iops": 15315.386547677606, 00:11:33.706 "mibps": 1914.4233184597008, 00:11:33.706 "io_failed": 1, 00:11:33.706 "io_timeout": 0, 00:11:33.706 "avg_latency_us": 90.53139338795722, 00:11:33.706 "min_latency_us": 26.494323144104804, 00:11:33.706 "max_latency_us": 1452.380786026201 00:11:33.706 } 00:11:33.706 ], 00:11:33.706 "core_count": 1 00:11:33.706 } 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67237 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67237 ']' 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67237 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.706 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67237 00:11:33.965 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.965 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.965 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67237' 00:11:33.965 killing process with pid 67237 00:11:33.965 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67237 00:11:33.965 [2024-12-05 20:04:35.144290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.965 20:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67237 00:11:33.965 [2024-12-05 
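(Editor's note: two figures in the bdevperf results block above follow directly from the others, which is what the script's later `fail_per_s` check relies on. A small `awk` sketch reproducing them from the values in the JSON: MiB/s is IOPS times the 128 KiB `io_size`, and the `0.70` fail rate checked afterwards is `io_failed / runtime`:)

```shell
#!/bin/sh
# Derive mibps and the failure rate from the results JSON above.
awk 'BEGIN {
  iops = 15315.386547677606; io_size = 131072    # from the results block
  io_failed = 1; runtime = 1.429151
  printf "mibps=%.2f\n", iops * io_size / (1024 * 1024)   # -> mibps=1914.42
  printf "fail_per_s=%.2f\n", io_failed / runtime         # -> fail_per_s=0.70
}'
```

The single failed I/O is expected: the test injected a read failure on `EE_BaseBdev1_malloc`, and a concat array has no redundancy to absorb it, which is why the script later asserts the rate is non-zero (`[[ 0.70 != \0\.\0\0 ]]`).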
20:04:35.378721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1IMgmJ76Gx 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.344 ************************************ 00:11:35.344 END TEST raid_read_error_test 00:11:35.344 ************************************ 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:35.344 00:11:35.344 real 0m4.611s 00:11:35.344 user 0m5.505s 00:11:35.344 sys 0m0.558s 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.344 20:04:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 20:04:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:35.344 20:04:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.344 20:04:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.344 20:04:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 ************************************ 00:11:35.344 START TEST raid_write_error_test 00:11:35.344 ************************************ 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:35.344 20:04:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.344 20:04:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FUVf71eYzt 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67381 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67381 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67381 ']' 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.344 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 [2024-12-05 20:04:36.740764] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:35.344 [2024-12-05 20:04:36.740983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67381 ] 00:11:35.604 [2024-12-05 20:04:36.915784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.604 [2024-12-05 20:04:37.029546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.865 [2024-12-05 20:04:37.229304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.865 [2024-12-05 20:04:37.229446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.476 BaseBdev1_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.476 true 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.476 [2024-12-05 20:04:37.648650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.476 [2024-12-05 20:04:37.648707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.476 [2024-12-05 20:04:37.648727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.476 [2024-12-05 20:04:37.648737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.476 [2024-12-05 20:04:37.650890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.476 [2024-12-05 20:04:37.650950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.476 BaseBdev1 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.476 BaseBdev2_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.476 true 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.476 [2024-12-05 20:04:37.715031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.476 [2024-12-05 20:04:37.715089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.476 [2024-12-05 20:04:37.715108] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.476 [2024-12-05 20:04:37.715118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.476 [2024-12-05 20:04:37.717403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.476 [2024-12-05 20:04:37.717447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.476 BaseBdev2 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.476 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.477 20:04:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.477 BaseBdev3_malloc 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.477 true 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.477 [2024-12-05 20:04:37.793276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.477 [2024-12-05 20:04:37.793328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.477 [2024-12-05 20:04:37.793347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.477 [2024-12-05 20:04:37.793358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.477 [2024-12-05 20:04:37.795400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.477 [2024-12-05 20:04:37.795440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:36.477 BaseBdev3 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.477 [2024-12-05 20:04:37.805354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.477 [2024-12-05 20:04:37.807267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.477 [2024-12-05 20:04:37.807340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.477 [2024-12-05 20:04:37.807541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:36.477 [2024-12-05 20:04:37.807554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:36.477 [2024-12-05 20:04:37.807787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:36.477 [2024-12-05 20:04:37.807954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.477 [2024-12-05 20:04:37.807969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.477 [2024-12-05 20:04:37.808126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.477 "name": "raid_bdev1", 00:11:36.477 "uuid": "069eafc6-8aa3-4cc5-8cae-42a40c254be7", 00:11:36.477 "strip_size_kb": 64, 00:11:36.477 "state": "online", 00:11:36.477 "raid_level": "concat", 00:11:36.477 "superblock": true, 00:11:36.477 "num_base_bdevs": 3, 00:11:36.477 "num_base_bdevs_discovered": 3, 00:11:36.477 "num_base_bdevs_operational": 3, 00:11:36.477 "base_bdevs_list": [ 00:11:36.477 { 00:11:36.477 
"name": "BaseBdev1", 00:11:36.477 "uuid": "0072de12-539f-5d96-a4d4-a6e11fd05a2f", 00:11:36.477 "is_configured": true, 00:11:36.477 "data_offset": 2048, 00:11:36.477 "data_size": 63488 00:11:36.477 }, 00:11:36.477 { 00:11:36.477 "name": "BaseBdev2", 00:11:36.477 "uuid": "fecb309c-bc85-57b3-b2d7-638a26d9844a", 00:11:36.477 "is_configured": true, 00:11:36.477 "data_offset": 2048, 00:11:36.477 "data_size": 63488 00:11:36.477 }, 00:11:36.477 { 00:11:36.477 "name": "BaseBdev3", 00:11:36.477 "uuid": "df7d026c-61a5-55de-b216-8ef42c1b1738", 00:11:36.477 "is_configured": true, 00:11:36.477 "data_offset": 2048, 00:11:36.477 "data_size": 63488 00:11:36.477 } 00:11:36.477 ] 00:11:36.477 }' 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.477 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.046 20:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.046 20:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.046 [2024-12-05 20:04:38.393615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.985 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.985 "name": "raid_bdev1", 00:11:37.985 "uuid": "069eafc6-8aa3-4cc5-8cae-42a40c254be7", 00:11:37.986 "strip_size_kb": 64, 00:11:37.986 "state": "online", 
00:11:37.986 "raid_level": "concat", 00:11:37.986 "superblock": true, 00:11:37.986 "num_base_bdevs": 3, 00:11:37.986 "num_base_bdevs_discovered": 3, 00:11:37.986 "num_base_bdevs_operational": 3, 00:11:37.986 "base_bdevs_list": [ 00:11:37.986 { 00:11:37.986 "name": "BaseBdev1", 00:11:37.986 "uuid": "0072de12-539f-5d96-a4d4-a6e11fd05a2f", 00:11:37.986 "is_configured": true, 00:11:37.986 "data_offset": 2048, 00:11:37.986 "data_size": 63488 00:11:37.986 }, 00:11:37.986 { 00:11:37.986 "name": "BaseBdev2", 00:11:37.986 "uuid": "fecb309c-bc85-57b3-b2d7-638a26d9844a", 00:11:37.986 "is_configured": true, 00:11:37.986 "data_offset": 2048, 00:11:37.986 "data_size": 63488 00:11:37.986 }, 00:11:37.986 { 00:11:37.986 "name": "BaseBdev3", 00:11:37.986 "uuid": "df7d026c-61a5-55de-b216-8ef42c1b1738", 00:11:37.986 "is_configured": true, 00:11:37.986 "data_offset": 2048, 00:11:37.986 "data_size": 63488 00:11:37.986 } 00:11:37.986 ] 00:11:37.986 }' 00:11:37.986 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.986 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 [2024-12-05 20:04:39.741902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.555 [2024-12-05 20:04:39.741936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.555 [2024-12-05 20:04:39.745029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.555 [2024-12-05 20:04:39.745082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.555 [2024-12-05 20:04:39.745124] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.555 [2024-12-05 20:04:39.745134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:38.555 { 00:11:38.555 "results": [ 00:11:38.555 { 00:11:38.555 "job": "raid_bdev1", 00:11:38.555 "core_mask": "0x1", 00:11:38.555 "workload": "randrw", 00:11:38.555 "percentage": 50, 00:11:38.555 "status": "finished", 00:11:38.555 "queue_depth": 1, 00:11:38.555 "io_size": 131072, 00:11:38.555 "runtime": 1.349037, 00:11:38.555 "iops": 14687.514130450092, 00:11:38.555 "mibps": 1835.9392663062615, 00:11:38.555 "io_failed": 1, 00:11:38.555 "io_timeout": 0, 00:11:38.555 "avg_latency_us": 94.31686347623817, 00:11:38.555 "min_latency_us": 27.053275109170304, 00:11:38.555 "max_latency_us": 1402.2986899563318 00:11:38.555 } 00:11:38.555 ], 00:11:38.555 "core_count": 1 00:11:38.555 } 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67381 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67381 ']' 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67381 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67381 00:11:38.555 killing process with pid 67381 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.555 
20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67381' 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67381 00:11:38.555 [2024-12-05 20:04:39.788385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.555 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67381 00:11:38.814 [2024-12-05 20:04:40.030112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FUVf71eYzt 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:40.196 00:11:40.196 real 0m4.595s 00:11:40.196 user 0m5.464s 00:11:40.196 sys 0m0.580s 00:11:40.196 ************************************ 00:11:40.196 END TEST raid_write_error_test 00:11:40.196 ************************************ 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.196 20:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.196 20:04:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.196 20:04:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:40.196 20:04:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.196 20:04:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.196 20:04:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.196 ************************************ 00:11:40.196 START TEST raid_state_function_test 00:11:40.196 ************************************ 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67526 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67526' 00:11:40.196 Process raid pid: 67526 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67526 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67526 ']' 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.196 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.196 [2024-12-05 20:04:41.406401] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:40.196 [2024-12-05 20:04:41.406605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.196 [2024-12-05 20:04:41.579299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.457 [2024-12-05 20:04:41.703249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.716 [2024-12-05 20:04:41.904658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.716 [2024-12-05 20:04:41.904796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 [2024-12-05 20:04:42.249637] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.976 [2024-12-05 20:04:42.249763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.976 [2024-12-05 20:04:42.249790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.976 [2024-12-05 20:04:42.249819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.976 [2024-12-05 20:04:42.249826] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.976 [2024-12-05 20:04:42.249835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.976 
20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.976 "name": "Existed_Raid", 00:11:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.976 "strip_size_kb": 0, 00:11:40.976 "state": "configuring", 00:11:40.976 "raid_level": "raid1", 00:11:40.976 "superblock": false, 00:11:40.976 "num_base_bdevs": 3, 00:11:40.976 "num_base_bdevs_discovered": 0, 00:11:40.976 "num_base_bdevs_operational": 3, 00:11:40.976 "base_bdevs_list": [ 00:11:40.976 { 00:11:40.976 "name": "BaseBdev1", 00:11:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.976 "is_configured": false, 00:11:40.976 "data_offset": 0, 00:11:40.976 "data_size": 0 00:11:40.976 }, 00:11:40.976 { 00:11:40.976 "name": "BaseBdev2", 00:11:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.976 "is_configured": false, 00:11:40.976 "data_offset": 0, 00:11:40.976 "data_size": 0 00:11:40.976 }, 00:11:40.976 { 00:11:40.976 "name": "BaseBdev3", 00:11:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.976 "is_configured": false, 00:11:40.976 "data_offset": 0, 00:11:40.976 "data_size": 0 00:11:40.976 } 00:11:40.976 ] 00:11:40.976 }' 00:11:40.976 20:04:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.976 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 [2024-12-05 20:04:42.736758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.546 [2024-12-05 20:04:42.736870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 [2024-12-05 20:04:42.744721] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.546 [2024-12-05 20:04:42.744818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.546 [2024-12-05 20:04:42.744885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.546 [2024-12-05 20:04:42.744933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.546 [2024-12-05 20:04:42.744989] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.546 [2024-12-05 20:04:42.745038] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 [2024-12-05 20:04:42.789444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.546 BaseBdev1 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.546 [ 00:11:41.546 { 00:11:41.546 "name": "BaseBdev1", 00:11:41.546 "aliases": [ 00:11:41.546 "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c" 00:11:41.546 ], 00:11:41.546 "product_name": "Malloc disk", 00:11:41.546 "block_size": 512, 00:11:41.546 "num_blocks": 65536, 00:11:41.546 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:41.546 "assigned_rate_limits": { 00:11:41.546 "rw_ios_per_sec": 0, 00:11:41.546 "rw_mbytes_per_sec": 0, 00:11:41.546 "r_mbytes_per_sec": 0, 00:11:41.546 "w_mbytes_per_sec": 0 00:11:41.546 }, 00:11:41.546 "claimed": true, 00:11:41.546 "claim_type": "exclusive_write", 00:11:41.546 "zoned": false, 00:11:41.546 "supported_io_types": { 00:11:41.546 "read": true, 00:11:41.546 "write": true, 00:11:41.546 "unmap": true, 00:11:41.546 "flush": true, 00:11:41.546 "reset": true, 00:11:41.546 "nvme_admin": false, 00:11:41.546 "nvme_io": false, 00:11:41.546 "nvme_io_md": false, 00:11:41.546 "write_zeroes": true, 00:11:41.546 "zcopy": true, 00:11:41.546 "get_zone_info": false, 00:11:41.546 "zone_management": false, 00:11:41.546 "zone_append": false, 00:11:41.546 "compare": false, 00:11:41.546 "compare_and_write": false, 00:11:41.546 "abort": true, 00:11:41.546 "seek_hole": false, 00:11:41.546 "seek_data": false, 00:11:41.546 "copy": true, 00:11:41.546 "nvme_iov_md": false 00:11:41.546 }, 00:11:41.546 "memory_domains": [ 00:11:41.546 { 00:11:41.546 "dma_device_id": "system", 00:11:41.546 "dma_device_type": 1 00:11:41.546 }, 00:11:41.546 { 00:11:41.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.546 "dma_device_type": 2 00:11:41.546 } 00:11:41.546 ], 00:11:41.546 "driver_specific": {} 00:11:41.546 } 00:11:41.546 ] 00:11:41.546 20:04:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.546 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:41.547 "name": "Existed_Raid", 00:11:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.547 "strip_size_kb": 0, 00:11:41.547 "state": "configuring", 00:11:41.547 "raid_level": "raid1", 00:11:41.547 "superblock": false, 00:11:41.547 "num_base_bdevs": 3, 00:11:41.547 "num_base_bdevs_discovered": 1, 00:11:41.547 "num_base_bdevs_operational": 3, 00:11:41.547 "base_bdevs_list": [ 00:11:41.547 { 00:11:41.547 "name": "BaseBdev1", 00:11:41.547 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:41.547 "is_configured": true, 00:11:41.547 "data_offset": 0, 00:11:41.547 "data_size": 65536 00:11:41.547 }, 00:11:41.547 { 00:11:41.547 "name": "BaseBdev2", 00:11:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.547 "is_configured": false, 00:11:41.547 "data_offset": 0, 00:11:41.547 "data_size": 0 00:11:41.547 }, 00:11:41.547 { 00:11:41.547 "name": "BaseBdev3", 00:11:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.547 "is_configured": false, 00:11:41.547 "data_offset": 0, 00:11:41.547 "data_size": 0 00:11:41.547 } 00:11:41.547 ] 00:11:41.547 }' 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.547 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.115 [2024-12-05 20:04:43.308610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.115 [2024-12-05 20:04:43.308744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.115 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.115 [2024-12-05 20:04:43.316626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.115 [2024-12-05 20:04:43.318481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.115 [2024-12-05 20:04:43.318578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.115 [2024-12-05 20:04:43.318597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.115 [2024-12-05 20:04:43.318607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.116 "name": "Existed_Raid", 00:11:42.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.116 "strip_size_kb": 0, 00:11:42.116 "state": "configuring", 00:11:42.116 "raid_level": "raid1", 00:11:42.116 "superblock": false, 00:11:42.116 "num_base_bdevs": 3, 00:11:42.116 "num_base_bdevs_discovered": 1, 00:11:42.116 "num_base_bdevs_operational": 3, 00:11:42.116 "base_bdevs_list": [ 00:11:42.116 { 00:11:42.116 "name": "BaseBdev1", 00:11:42.116 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:42.116 "is_configured": true, 00:11:42.116 "data_offset": 0, 00:11:42.116 "data_size": 65536 00:11:42.116 }, 00:11:42.116 { 00:11:42.116 "name": "BaseBdev2", 00:11:42.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.116 
"is_configured": false, 00:11:42.116 "data_offset": 0, 00:11:42.116 "data_size": 0 00:11:42.116 }, 00:11:42.116 { 00:11:42.116 "name": "BaseBdev3", 00:11:42.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.116 "is_configured": false, 00:11:42.116 "data_offset": 0, 00:11:42.116 "data_size": 0 00:11:42.116 } 00:11:42.116 ] 00:11:42.116 }' 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.116 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.378 [2024-12-05 20:04:43.759317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.378 BaseBdev2 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.378 20:04:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.378 [ 00:11:42.378 { 00:11:42.378 "name": "BaseBdev2", 00:11:42.378 "aliases": [ 00:11:42.378 "005294ea-00da-4f3a-931c-66f6dcc4cdd9" 00:11:42.378 ], 00:11:42.378 "product_name": "Malloc disk", 00:11:42.378 "block_size": 512, 00:11:42.378 "num_blocks": 65536, 00:11:42.378 "uuid": "005294ea-00da-4f3a-931c-66f6dcc4cdd9", 00:11:42.378 "assigned_rate_limits": { 00:11:42.378 "rw_ios_per_sec": 0, 00:11:42.378 "rw_mbytes_per_sec": 0, 00:11:42.378 "r_mbytes_per_sec": 0, 00:11:42.378 "w_mbytes_per_sec": 0 00:11:42.378 }, 00:11:42.378 "claimed": true, 00:11:42.378 "claim_type": "exclusive_write", 00:11:42.378 "zoned": false, 00:11:42.378 "supported_io_types": { 00:11:42.378 "read": true, 00:11:42.378 "write": true, 00:11:42.378 "unmap": true, 00:11:42.378 "flush": true, 00:11:42.378 "reset": true, 00:11:42.378 "nvme_admin": false, 00:11:42.378 "nvme_io": false, 00:11:42.378 "nvme_io_md": false, 00:11:42.378 "write_zeroes": true, 00:11:42.378 "zcopy": true, 00:11:42.378 "get_zone_info": false, 00:11:42.378 "zone_management": false, 00:11:42.378 "zone_append": false, 00:11:42.378 "compare": false, 00:11:42.378 "compare_and_write": false, 00:11:42.378 "abort": true, 00:11:42.378 "seek_hole": false, 00:11:42.378 "seek_data": false, 00:11:42.378 "copy": true, 00:11:42.378 "nvme_iov_md": false 00:11:42.378 }, 00:11:42.378 
"memory_domains": [ 00:11:42.378 { 00:11:42.378 "dma_device_id": "system", 00:11:42.378 "dma_device_type": 1 00:11:42.378 }, 00:11:42.378 { 00:11:42.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.378 "dma_device_type": 2 00:11:42.378 } 00:11:42.378 ], 00:11:42.378 "driver_specific": {} 00:11:42.378 } 00:11:42.378 ] 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.378 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.379 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.637 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.637 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.637 "name": "Existed_Raid", 00:11:42.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.637 "strip_size_kb": 0, 00:11:42.637 "state": "configuring", 00:11:42.637 "raid_level": "raid1", 00:11:42.637 "superblock": false, 00:11:42.637 "num_base_bdevs": 3, 00:11:42.637 "num_base_bdevs_discovered": 2, 00:11:42.637 "num_base_bdevs_operational": 3, 00:11:42.637 "base_bdevs_list": [ 00:11:42.637 { 00:11:42.637 "name": "BaseBdev1", 00:11:42.637 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:42.638 "is_configured": true, 00:11:42.638 "data_offset": 0, 00:11:42.638 "data_size": 65536 00:11:42.638 }, 00:11:42.638 { 00:11:42.638 "name": "BaseBdev2", 00:11:42.638 "uuid": "005294ea-00da-4f3a-931c-66f6dcc4cdd9", 00:11:42.638 "is_configured": true, 00:11:42.638 "data_offset": 0, 00:11:42.638 "data_size": 65536 00:11:42.638 }, 00:11:42.638 { 00:11:42.638 "name": "BaseBdev3", 00:11:42.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.638 "is_configured": false, 00:11:42.638 "data_offset": 0, 00:11:42.638 "data_size": 0 00:11:42.638 } 00:11:42.638 ] 00:11:42.638 }' 00:11:42.638 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.638 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.897 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:42.897 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.897 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.898 [2024-12-05 20:04:44.316468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.898 [2024-12-05 20:04:44.316586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.898 [2024-12-05 20:04:44.316661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.898 [2024-12-05 20:04:44.317037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.898 [2024-12-05 20:04:44.317313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.898 [2024-12-05 20:04:44.317334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:42.898 [2024-12-05 20:04:44.317631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.898 BaseBdev3 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.898 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.157 [ 00:11:43.157 { 00:11:43.157 "name": "BaseBdev3", 00:11:43.157 "aliases": [ 00:11:43.157 "9bd87cd4-4936-4e0e-bc30-b40a8e9dd32e" 00:11:43.157 ], 00:11:43.157 "product_name": "Malloc disk", 00:11:43.157 "block_size": 512, 00:11:43.157 "num_blocks": 65536, 00:11:43.157 "uuid": "9bd87cd4-4936-4e0e-bc30-b40a8e9dd32e", 00:11:43.157 "assigned_rate_limits": { 00:11:43.157 "rw_ios_per_sec": 0, 00:11:43.157 "rw_mbytes_per_sec": 0, 00:11:43.157 "r_mbytes_per_sec": 0, 00:11:43.157 "w_mbytes_per_sec": 0 00:11:43.157 }, 00:11:43.157 "claimed": true, 00:11:43.157 "claim_type": "exclusive_write", 00:11:43.157 "zoned": false, 00:11:43.157 "supported_io_types": { 00:11:43.157 "read": true, 00:11:43.157 "write": true, 00:11:43.157 "unmap": true, 00:11:43.157 "flush": true, 00:11:43.158 "reset": true, 00:11:43.158 "nvme_admin": false, 00:11:43.158 "nvme_io": false, 00:11:43.158 "nvme_io_md": false, 00:11:43.158 "write_zeroes": true, 00:11:43.158 "zcopy": true, 00:11:43.158 "get_zone_info": false, 00:11:43.158 "zone_management": false, 00:11:43.158 "zone_append": false, 00:11:43.158 "compare": false, 00:11:43.158 "compare_and_write": false, 00:11:43.158 "abort": true, 00:11:43.158 "seek_hole": false, 00:11:43.158 "seek_data": false, 00:11:43.158 
"copy": true, 00:11:43.158 "nvme_iov_md": false 00:11:43.158 }, 00:11:43.158 "memory_domains": [ 00:11:43.158 { 00:11:43.158 "dma_device_id": "system", 00:11:43.158 "dma_device_type": 1 00:11:43.158 }, 00:11:43.158 { 00:11:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.158 "dma_device_type": 2 00:11:43.158 } 00:11:43.158 ], 00:11:43.158 "driver_specific": {} 00:11:43.158 } 00:11:43.158 ] 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.158 20:04:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.158 "name": "Existed_Raid", 00:11:43.158 "uuid": "dae8e48b-3ea5-4c5c-aec6-b54fc90c44f6", 00:11:43.158 "strip_size_kb": 0, 00:11:43.158 "state": "online", 00:11:43.158 "raid_level": "raid1", 00:11:43.158 "superblock": false, 00:11:43.158 "num_base_bdevs": 3, 00:11:43.158 "num_base_bdevs_discovered": 3, 00:11:43.158 "num_base_bdevs_operational": 3, 00:11:43.158 "base_bdevs_list": [ 00:11:43.158 { 00:11:43.158 "name": "BaseBdev1", 00:11:43.158 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:43.158 "is_configured": true, 00:11:43.158 "data_offset": 0, 00:11:43.158 "data_size": 65536 00:11:43.158 }, 00:11:43.158 { 00:11:43.158 "name": "BaseBdev2", 00:11:43.158 "uuid": "005294ea-00da-4f3a-931c-66f6dcc4cdd9", 00:11:43.158 "is_configured": true, 00:11:43.158 "data_offset": 0, 00:11:43.158 "data_size": 65536 00:11:43.158 }, 00:11:43.158 { 00:11:43.158 "name": "BaseBdev3", 00:11:43.158 "uuid": "9bd87cd4-4936-4e0e-bc30-b40a8e9dd32e", 00:11:43.158 "is_configured": true, 00:11:43.158 "data_offset": 0, 00:11:43.158 "data_size": 65536 00:11:43.158 } 00:11:43.158 ] 00:11:43.158 }' 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.158 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.417 20:04:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.417 [2024-12-05 20:04:44.820005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.417 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.677 "name": "Existed_Raid", 00:11:43.677 "aliases": [ 00:11:43.677 "dae8e48b-3ea5-4c5c-aec6-b54fc90c44f6" 00:11:43.677 ], 00:11:43.677 "product_name": "Raid Volume", 00:11:43.677 "block_size": 512, 00:11:43.677 "num_blocks": 65536, 00:11:43.677 "uuid": "dae8e48b-3ea5-4c5c-aec6-b54fc90c44f6", 00:11:43.677 "assigned_rate_limits": { 00:11:43.677 "rw_ios_per_sec": 0, 00:11:43.677 "rw_mbytes_per_sec": 0, 00:11:43.677 "r_mbytes_per_sec": 0, 00:11:43.677 "w_mbytes_per_sec": 0 00:11:43.677 }, 00:11:43.677 "claimed": false, 00:11:43.677 "zoned": false, 
00:11:43.677 "supported_io_types": { 00:11:43.677 "read": true, 00:11:43.677 "write": true, 00:11:43.677 "unmap": false, 00:11:43.677 "flush": false, 00:11:43.677 "reset": true, 00:11:43.677 "nvme_admin": false, 00:11:43.677 "nvme_io": false, 00:11:43.677 "nvme_io_md": false, 00:11:43.677 "write_zeroes": true, 00:11:43.677 "zcopy": false, 00:11:43.677 "get_zone_info": false, 00:11:43.677 "zone_management": false, 00:11:43.677 "zone_append": false, 00:11:43.677 "compare": false, 00:11:43.677 "compare_and_write": false, 00:11:43.677 "abort": false, 00:11:43.677 "seek_hole": false, 00:11:43.677 "seek_data": false, 00:11:43.677 "copy": false, 00:11:43.677 "nvme_iov_md": false 00:11:43.677 }, 00:11:43.677 "memory_domains": [ 00:11:43.677 { 00:11:43.677 "dma_device_id": "system", 00:11:43.677 "dma_device_type": 1 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.677 "dma_device_type": 2 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "dma_device_id": "system", 00:11:43.677 "dma_device_type": 1 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.677 "dma_device_type": 2 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "dma_device_id": "system", 00:11:43.677 "dma_device_type": 1 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.677 "dma_device_type": 2 00:11:43.677 } 00:11:43.677 ], 00:11:43.677 "driver_specific": { 00:11:43.677 "raid": { 00:11:43.677 "uuid": "dae8e48b-3ea5-4c5c-aec6-b54fc90c44f6", 00:11:43.677 "strip_size_kb": 0, 00:11:43.677 "state": "online", 00:11:43.677 "raid_level": "raid1", 00:11:43.677 "superblock": false, 00:11:43.677 "num_base_bdevs": 3, 00:11:43.677 "num_base_bdevs_discovered": 3, 00:11:43.677 "num_base_bdevs_operational": 3, 00:11:43.677 "base_bdevs_list": [ 00:11:43.677 { 00:11:43.677 "name": "BaseBdev1", 00:11:43.677 "uuid": "6eaa22f5-58e9-405c-91c3-b3a2b7bbab0c", 00:11:43.677 "is_configured": true, 00:11:43.677 
"data_offset": 0, 00:11:43.677 "data_size": 65536 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "name": "BaseBdev2", 00:11:43.677 "uuid": "005294ea-00da-4f3a-931c-66f6dcc4cdd9", 00:11:43.677 "is_configured": true, 00:11:43.677 "data_offset": 0, 00:11:43.677 "data_size": 65536 00:11:43.677 }, 00:11:43.677 { 00:11:43.677 "name": "BaseBdev3", 00:11:43.677 "uuid": "9bd87cd4-4936-4e0e-bc30-b40a8e9dd32e", 00:11:43.677 "is_configured": true, 00:11:43.677 "data_offset": 0, 00:11:43.677 "data_size": 65536 00:11:43.677 } 00:11:43.677 ] 00:11:43.677 } 00:11:43.677 } 00:11:43.677 }' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.677 BaseBdev2 00:11:43.677 BaseBdev3' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.677 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.678 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.678 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.678 [2024-12-05 20:04:45.103233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.937 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.937 "name": "Existed_Raid", 00:11:43.937 "uuid": "dae8e48b-3ea5-4c5c-aec6-b54fc90c44f6", 00:11:43.937 "strip_size_kb": 0, 00:11:43.937 "state": "online", 00:11:43.937 "raid_level": "raid1", 00:11:43.937 "superblock": false, 00:11:43.937 "num_base_bdevs": 3, 00:11:43.937 "num_base_bdevs_discovered": 2, 00:11:43.937 "num_base_bdevs_operational": 2, 00:11:43.937 "base_bdevs_list": [ 00:11:43.937 { 00:11:43.937 "name": null, 00:11:43.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.937 "is_configured": false, 00:11:43.937 "data_offset": 0, 00:11:43.937 "data_size": 65536 00:11:43.937 }, 00:11:43.937 { 00:11:43.938 "name": "BaseBdev2", 00:11:43.938 "uuid": "005294ea-00da-4f3a-931c-66f6dcc4cdd9", 00:11:43.938 "is_configured": true, 00:11:43.938 "data_offset": 0, 00:11:43.938 "data_size": 65536 00:11:43.938 }, 00:11:43.938 { 00:11:43.938 "name": "BaseBdev3", 00:11:43.938 "uuid": "9bd87cd4-4936-4e0e-bc30-b40a8e9dd32e", 00:11:43.938 "is_configured": true, 00:11:43.938 "data_offset": 0, 00:11:43.938 "data_size": 65536 00:11:43.938 } 00:11:43.938 ] 
00:11:43.938 }' 00:11:43.938 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.938 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.197 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.197 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.197 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.197 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.198 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.198 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.456 [2024-12-05 20:04:45.669557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.456 20:04:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.456 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.456 [2024-12-05 20:04:45.827480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.456 [2024-12-05 20:04:45.827580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.715 [2024-12-05 20:04:45.922499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.715 [2024-12-05 20:04:45.922557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.715 [2024-12-05 20:04:45.922569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.715 20:04:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.715 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.715 BaseBdev2 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.715 
20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.715 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.715 [ 00:11:44.715 { 00:11:44.715 "name": "BaseBdev2", 00:11:44.715 "aliases": [ 00:11:44.715 "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8" 00:11:44.715 ], 00:11:44.715 "product_name": "Malloc disk", 00:11:44.715 "block_size": 512, 00:11:44.715 "num_blocks": 65536, 00:11:44.715 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:44.715 "assigned_rate_limits": { 00:11:44.715 "rw_ios_per_sec": 0, 00:11:44.715 "rw_mbytes_per_sec": 0, 00:11:44.715 "r_mbytes_per_sec": 0, 00:11:44.715 "w_mbytes_per_sec": 0 00:11:44.715 }, 00:11:44.715 "claimed": false, 00:11:44.715 "zoned": false, 00:11:44.715 "supported_io_types": { 00:11:44.715 "read": true, 00:11:44.715 "write": true, 00:11:44.715 "unmap": true, 00:11:44.715 "flush": true, 00:11:44.715 "reset": true, 00:11:44.715 "nvme_admin": false, 00:11:44.716 "nvme_io": false, 00:11:44.716 "nvme_io_md": false, 00:11:44.716 "write_zeroes": true, 
00:11:44.716 "zcopy": true, 00:11:44.716 "get_zone_info": false, 00:11:44.716 "zone_management": false, 00:11:44.716 "zone_append": false, 00:11:44.716 "compare": false, 00:11:44.716 "compare_and_write": false, 00:11:44.716 "abort": true, 00:11:44.716 "seek_hole": false, 00:11:44.716 "seek_data": false, 00:11:44.716 "copy": true, 00:11:44.716 "nvme_iov_md": false 00:11:44.716 }, 00:11:44.716 "memory_domains": [ 00:11:44.716 { 00:11:44.716 "dma_device_id": "system", 00:11:44.716 "dma_device_type": 1 00:11:44.716 }, 00:11:44.716 { 00:11:44.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.716 "dma_device_type": 2 00:11:44.716 } 00:11:44.716 ], 00:11:44.716 "driver_specific": {} 00:11:44.716 } 00:11:44.716 ] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.716 BaseBdev3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.716 20:04:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.716 [ 00:11:44.716 { 00:11:44.716 "name": "BaseBdev3", 00:11:44.716 "aliases": [ 00:11:44.716 "f80dab4e-686f-4e8a-b304-3a336d5a72c4" 00:11:44.716 ], 00:11:44.716 "product_name": "Malloc disk", 00:11:44.716 "block_size": 512, 00:11:44.716 "num_blocks": 65536, 00:11:44.716 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:44.716 "assigned_rate_limits": { 00:11:44.716 "rw_ios_per_sec": 0, 00:11:44.716 "rw_mbytes_per_sec": 0, 00:11:44.716 "r_mbytes_per_sec": 0, 00:11:44.716 "w_mbytes_per_sec": 0 00:11:44.716 }, 00:11:44.716 "claimed": false, 00:11:44.716 "zoned": false, 00:11:44.716 "supported_io_types": { 00:11:44.716 "read": true, 00:11:44.716 "write": true, 00:11:44.716 "unmap": true, 00:11:44.716 "flush": true, 00:11:44.716 "reset": true, 00:11:44.716 "nvme_admin": false, 00:11:44.716 "nvme_io": false, 00:11:44.716 "nvme_io_md": false, 00:11:44.716 "write_zeroes": true, 
00:11:44.716 "zcopy": true, 00:11:44.716 "get_zone_info": false, 00:11:44.716 "zone_management": false, 00:11:44.716 "zone_append": false, 00:11:44.716 "compare": false, 00:11:44.716 "compare_and_write": false, 00:11:44.716 "abort": true, 00:11:44.716 "seek_hole": false, 00:11:44.716 "seek_data": false, 00:11:44.716 "copy": true, 00:11:44.716 "nvme_iov_md": false 00:11:44.716 }, 00:11:44.716 "memory_domains": [ 00:11:44.716 { 00:11:44.716 "dma_device_id": "system", 00:11:44.716 "dma_device_type": 1 00:11:44.716 }, 00:11:44.716 { 00:11:44.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.716 "dma_device_type": 2 00:11:44.716 } 00:11:44.716 ], 00:11:44.716 "driver_specific": {} 00:11:44.716 } 00:11:44.716 ] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.716 [2024-12-05 20:04:46.142425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.716 [2024-12-05 20:04:46.142558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.716 [2024-12-05 20:04:46.142624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.716 [2024-12-05 20:04:46.144638] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.716 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:44.975 "name": "Existed_Raid", 00:11:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.975 "strip_size_kb": 0, 00:11:44.975 "state": "configuring", 00:11:44.975 "raid_level": "raid1", 00:11:44.975 "superblock": false, 00:11:44.975 "num_base_bdevs": 3, 00:11:44.975 "num_base_bdevs_discovered": 2, 00:11:44.975 "num_base_bdevs_operational": 3, 00:11:44.975 "base_bdevs_list": [ 00:11:44.975 { 00:11:44.975 "name": "BaseBdev1", 00:11:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.975 "is_configured": false, 00:11:44.975 "data_offset": 0, 00:11:44.975 "data_size": 0 00:11:44.975 }, 00:11:44.975 { 00:11:44.975 "name": "BaseBdev2", 00:11:44.975 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:44.975 "is_configured": true, 00:11:44.975 "data_offset": 0, 00:11:44.975 "data_size": 65536 00:11:44.975 }, 00:11:44.975 { 00:11:44.975 "name": "BaseBdev3", 00:11:44.975 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:44.975 "is_configured": true, 00:11:44.975 "data_offset": 0, 00:11:44.975 "data_size": 65536 00:11:44.975 } 00:11:44.975 ] 00:11:44.975 }' 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.975 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.235 [2024-12-05 20:04:46.605668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.235 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.495 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.495 "name": "Existed_Raid", 00:11:45.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.495 "strip_size_kb": 0, 00:11:45.495 "state": "configuring", 00:11:45.495 "raid_level": "raid1", 00:11:45.495 "superblock": false, 00:11:45.495 "num_base_bdevs": 3, 
00:11:45.495 "num_base_bdevs_discovered": 1, 00:11:45.495 "num_base_bdevs_operational": 3, 00:11:45.495 "base_bdevs_list": [ 00:11:45.495 { 00:11:45.495 "name": "BaseBdev1", 00:11:45.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.495 "is_configured": false, 00:11:45.495 "data_offset": 0, 00:11:45.495 "data_size": 0 00:11:45.495 }, 00:11:45.495 { 00:11:45.495 "name": null, 00:11:45.495 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:45.495 "is_configured": false, 00:11:45.495 "data_offset": 0, 00:11:45.495 "data_size": 65536 00:11:45.495 }, 00:11:45.495 { 00:11:45.495 "name": "BaseBdev3", 00:11:45.495 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:45.495 "is_configured": true, 00:11:45.495 "data_offset": 0, 00:11:45.495 "data_size": 65536 00:11:45.495 } 00:11:45.495 ] 00:11:45.495 }' 00:11:45.495 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.495 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.755 20:04:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.755 [2024-12-05 20:04:47.166125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.755 BaseBdev1 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.755 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.015 [ 00:11:46.015 { 00:11:46.015 "name": "BaseBdev1", 00:11:46.015 "aliases": [ 00:11:46.015 "88d62721-5014-4a7c-be00-cdeaaac55191" 00:11:46.015 ], 00:11:46.015 "product_name": "Malloc disk", 
00:11:46.015 "block_size": 512, 00:11:46.015 "num_blocks": 65536, 00:11:46.015 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:46.015 "assigned_rate_limits": { 00:11:46.015 "rw_ios_per_sec": 0, 00:11:46.015 "rw_mbytes_per_sec": 0, 00:11:46.015 "r_mbytes_per_sec": 0, 00:11:46.015 "w_mbytes_per_sec": 0 00:11:46.015 }, 00:11:46.015 "claimed": true, 00:11:46.015 "claim_type": "exclusive_write", 00:11:46.015 "zoned": false, 00:11:46.015 "supported_io_types": { 00:11:46.015 "read": true, 00:11:46.015 "write": true, 00:11:46.015 "unmap": true, 00:11:46.015 "flush": true, 00:11:46.015 "reset": true, 00:11:46.015 "nvme_admin": false, 00:11:46.015 "nvme_io": false, 00:11:46.015 "nvme_io_md": false, 00:11:46.015 "write_zeroes": true, 00:11:46.015 "zcopy": true, 00:11:46.015 "get_zone_info": false, 00:11:46.015 "zone_management": false, 00:11:46.015 "zone_append": false, 00:11:46.015 "compare": false, 00:11:46.015 "compare_and_write": false, 00:11:46.015 "abort": true, 00:11:46.015 "seek_hole": false, 00:11:46.015 "seek_data": false, 00:11:46.015 "copy": true, 00:11:46.015 "nvme_iov_md": false 00:11:46.015 }, 00:11:46.015 "memory_domains": [ 00:11:46.015 { 00:11:46.015 "dma_device_id": "system", 00:11:46.015 "dma_device_type": 1 00:11:46.015 }, 00:11:46.015 { 00:11:46.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.015 "dma_device_type": 2 00:11:46.015 } 00:11:46.015 ], 00:11:46.015 "driver_specific": {} 00:11:46.015 } 00:11:46.015 ] 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.015 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.016 "name": "Existed_Raid", 00:11:46.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.016 "strip_size_kb": 0, 00:11:46.016 "state": "configuring", 00:11:46.016 "raid_level": "raid1", 00:11:46.016 "superblock": false, 00:11:46.016 "num_base_bdevs": 3, 00:11:46.016 "num_base_bdevs_discovered": 2, 00:11:46.016 "num_base_bdevs_operational": 3, 00:11:46.016 "base_bdevs_list": [ 00:11:46.016 { 00:11:46.016 "name": "BaseBdev1", 00:11:46.016 "uuid": 
"88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:46.016 "is_configured": true, 00:11:46.016 "data_offset": 0, 00:11:46.016 "data_size": 65536 00:11:46.016 }, 00:11:46.016 { 00:11:46.016 "name": null, 00:11:46.016 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:46.016 "is_configured": false, 00:11:46.016 "data_offset": 0, 00:11:46.016 "data_size": 65536 00:11:46.016 }, 00:11:46.016 { 00:11:46.016 "name": "BaseBdev3", 00:11:46.016 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:46.016 "is_configured": true, 00:11:46.016 "data_offset": 0, 00:11:46.016 "data_size": 65536 00:11:46.016 } 00:11:46.016 ] 00:11:46.016 }' 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.016 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 [2024-12-05 20:04:47.709321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.534 20:04:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.534 "name": "Existed_Raid", 00:11:46.534 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:46.534 "strip_size_kb": 0, 00:11:46.534 "state": "configuring", 00:11:46.534 "raid_level": "raid1", 00:11:46.534 "superblock": false, 00:11:46.534 "num_base_bdevs": 3, 00:11:46.534 "num_base_bdevs_discovered": 1, 00:11:46.534 "num_base_bdevs_operational": 3, 00:11:46.534 "base_bdevs_list": [ 00:11:46.534 { 00:11:46.534 "name": "BaseBdev1", 00:11:46.534 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:46.534 "is_configured": true, 00:11:46.534 "data_offset": 0, 00:11:46.534 "data_size": 65536 00:11:46.534 }, 00:11:46.534 { 00:11:46.534 "name": null, 00:11:46.534 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:46.534 "is_configured": false, 00:11:46.534 "data_offset": 0, 00:11:46.534 "data_size": 65536 00:11:46.534 }, 00:11:46.534 { 00:11:46.534 "name": null, 00:11:46.534 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:46.534 "is_configured": false, 00:11:46.534 "data_offset": 0, 00:11:46.534 "data_size": 65536 00:11:46.534 } 00:11:46.534 ] 00:11:46.534 }' 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.534 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.793 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.793 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.793 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.793 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.052 [2024-12-05 20:04:48.268415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.052 "name": "Existed_Raid", 00:11:47.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.052 "strip_size_kb": 0, 00:11:47.052 "state": "configuring", 00:11:47.052 "raid_level": "raid1", 00:11:47.052 "superblock": false, 00:11:47.052 "num_base_bdevs": 3, 00:11:47.052 "num_base_bdevs_discovered": 2, 00:11:47.052 "num_base_bdevs_operational": 3, 00:11:47.052 "base_bdevs_list": [ 00:11:47.052 { 00:11:47.052 "name": "BaseBdev1", 00:11:47.052 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:47.052 "is_configured": true, 00:11:47.052 "data_offset": 0, 00:11:47.052 "data_size": 65536 00:11:47.052 }, 00:11:47.052 { 00:11:47.052 "name": null, 00:11:47.052 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:47.052 "is_configured": false, 00:11:47.052 "data_offset": 0, 00:11:47.052 "data_size": 65536 00:11:47.052 }, 00:11:47.052 { 00:11:47.052 "name": "BaseBdev3", 00:11:47.052 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:47.052 "is_configured": true, 00:11:47.052 "data_offset": 0, 00:11:47.052 "data_size": 65536 00:11:47.052 } 00:11:47.052 ] 00:11:47.052 }' 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.052 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.312 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.312 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.312 20:04:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.312 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 [2024-12-05 20:04:48.775633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.571 20:04:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.571 "name": "Existed_Raid", 00:11:47.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.571 "strip_size_kb": 0, 00:11:47.571 "state": "configuring", 00:11:47.571 "raid_level": "raid1", 00:11:47.571 "superblock": false, 00:11:47.571 "num_base_bdevs": 3, 00:11:47.571 "num_base_bdevs_discovered": 1, 00:11:47.571 "num_base_bdevs_operational": 3, 00:11:47.571 "base_bdevs_list": [ 00:11:47.571 { 00:11:47.571 "name": null, 00:11:47.571 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:47.571 "is_configured": false, 00:11:47.571 "data_offset": 0, 00:11:47.571 "data_size": 65536 00:11:47.571 }, 00:11:47.571 { 00:11:47.571 "name": null, 00:11:47.571 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:47.571 "is_configured": false, 00:11:47.571 "data_offset": 0, 00:11:47.571 "data_size": 65536 00:11:47.571 }, 00:11:47.571 { 00:11:47.571 "name": "BaseBdev3", 00:11:47.571 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:47.571 "is_configured": true, 00:11:47.571 "data_offset": 0, 00:11:47.571 "data_size": 65536 00:11:47.571 } 00:11:47.571 ] 00:11:47.571 }' 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.571 20:04:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.138 [2024-12-05 20:04:49.353324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.138 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.139 "name": "Existed_Raid", 00:11:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.139 "strip_size_kb": 0, 00:11:48.139 "state": "configuring", 00:11:48.139 "raid_level": "raid1", 00:11:48.139 "superblock": false, 00:11:48.139 "num_base_bdevs": 3, 00:11:48.139 "num_base_bdevs_discovered": 2, 00:11:48.139 "num_base_bdevs_operational": 3, 00:11:48.139 "base_bdevs_list": [ 00:11:48.139 { 00:11:48.139 "name": null, 00:11:48.139 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:48.139 "is_configured": false, 00:11:48.139 "data_offset": 0, 00:11:48.139 "data_size": 65536 00:11:48.139 }, 00:11:48.139 { 00:11:48.139 "name": "BaseBdev2", 00:11:48.139 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:48.139 "is_configured": true, 00:11:48.139 "data_offset": 0, 00:11:48.139 "data_size": 65536 00:11:48.139 }, 00:11:48.139 { 
00:11:48.139 "name": "BaseBdev3", 00:11:48.139 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:48.139 "is_configured": true, 00:11:48.139 "data_offset": 0, 00:11:48.139 "data_size": 65536 00:11:48.139 } 00:11:48.139 ] 00:11:48.139 }' 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.139 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.398 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88d62721-5014-4a7c-be00-cdeaaac55191 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.658 20:04:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.658 [2024-12-05 20:04:49.914049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:48.658 [2024-12-05 20:04:49.914194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.658 [2024-12-05 20:04:49.914248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:48.658 [2024-12-05 20:04:49.914616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:48.658 [2024-12-05 20:04:49.914865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.658 [2024-12-05 20:04:49.914885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:48.658 [2024-12-05 20:04:49.915159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.658 NewBaseBdev 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.658 [ 00:11:48.658 { 00:11:48.658 "name": "NewBaseBdev", 00:11:48.658 "aliases": [ 00:11:48.658 "88d62721-5014-4a7c-be00-cdeaaac55191" 00:11:48.658 ], 00:11:48.658 "product_name": "Malloc disk", 00:11:48.658 "block_size": 512, 00:11:48.658 "num_blocks": 65536, 00:11:48.658 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:48.658 "assigned_rate_limits": { 00:11:48.658 "rw_ios_per_sec": 0, 00:11:48.658 "rw_mbytes_per_sec": 0, 00:11:48.658 "r_mbytes_per_sec": 0, 00:11:48.658 "w_mbytes_per_sec": 0 00:11:48.658 }, 00:11:48.658 "claimed": true, 00:11:48.658 "claim_type": "exclusive_write", 00:11:48.658 "zoned": false, 00:11:48.658 "supported_io_types": { 00:11:48.658 "read": true, 00:11:48.658 "write": true, 00:11:48.658 "unmap": true, 00:11:48.658 "flush": true, 00:11:48.658 "reset": true, 00:11:48.658 "nvme_admin": false, 00:11:48.658 "nvme_io": false, 00:11:48.658 "nvme_io_md": false, 00:11:48.658 "write_zeroes": true, 00:11:48.658 "zcopy": true, 00:11:48.658 "get_zone_info": false, 00:11:48.658 "zone_management": false, 00:11:48.658 "zone_append": false, 00:11:48.658 "compare": false, 00:11:48.658 "compare_and_write": false, 00:11:48.658 "abort": true, 00:11:48.658 "seek_hole": false, 00:11:48.658 "seek_data": false, 00:11:48.658 "copy": true, 00:11:48.658 "nvme_iov_md": false 00:11:48.658 }, 00:11:48.658 "memory_domains": [ 00:11:48.658 { 00:11:48.658 
"dma_device_id": "system", 00:11:48.658 "dma_device_type": 1 00:11:48.658 }, 00:11:48.658 { 00:11:48.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.658 "dma_device_type": 2 00:11:48.658 } 00:11:48.658 ], 00:11:48.658 "driver_specific": {} 00:11:48.658 } 00:11:48.658 ] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.658 "name": "Existed_Raid", 00:11:48.658 "uuid": "06188a01-7950-4413-9f8e-3612f66d02ad", 00:11:48.658 "strip_size_kb": 0, 00:11:48.658 "state": "online", 00:11:48.658 "raid_level": "raid1", 00:11:48.658 "superblock": false, 00:11:48.658 "num_base_bdevs": 3, 00:11:48.658 "num_base_bdevs_discovered": 3, 00:11:48.658 "num_base_bdevs_operational": 3, 00:11:48.658 "base_bdevs_list": [ 00:11:48.658 { 00:11:48.658 "name": "NewBaseBdev", 00:11:48.658 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:48.658 "is_configured": true, 00:11:48.658 "data_offset": 0, 00:11:48.658 "data_size": 65536 00:11:48.658 }, 00:11:48.658 { 00:11:48.658 "name": "BaseBdev2", 00:11:48.658 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:48.658 "is_configured": true, 00:11:48.658 "data_offset": 0, 00:11:48.658 "data_size": 65536 00:11:48.658 }, 00:11:48.658 { 00:11:48.658 "name": "BaseBdev3", 00:11:48.658 "uuid": "f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:48.658 "is_configured": true, 00:11:48.658 "data_offset": 0, 00:11:48.658 "data_size": 65536 00:11:48.658 } 00:11:48.658 ] 00:11:48.658 }' 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.658 20:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.231 20:04:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.231 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.232 [2024-12-05 20:04:50.373683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.232 "name": "Existed_Raid", 00:11:49.232 "aliases": [ 00:11:49.232 "06188a01-7950-4413-9f8e-3612f66d02ad" 00:11:49.232 ], 00:11:49.232 "product_name": "Raid Volume", 00:11:49.232 "block_size": 512, 00:11:49.232 "num_blocks": 65536, 00:11:49.232 "uuid": "06188a01-7950-4413-9f8e-3612f66d02ad", 00:11:49.232 "assigned_rate_limits": { 00:11:49.232 "rw_ios_per_sec": 0, 00:11:49.232 "rw_mbytes_per_sec": 0, 00:11:49.232 "r_mbytes_per_sec": 0, 00:11:49.232 "w_mbytes_per_sec": 0 00:11:49.232 }, 00:11:49.232 "claimed": false, 00:11:49.232 "zoned": false, 00:11:49.232 "supported_io_types": { 00:11:49.232 "read": true, 00:11:49.232 "write": true, 00:11:49.232 "unmap": false, 00:11:49.232 "flush": false, 00:11:49.232 "reset": true, 00:11:49.232 "nvme_admin": false, 00:11:49.232 "nvme_io": false, 00:11:49.232 "nvme_io_md": false, 00:11:49.232 "write_zeroes": true, 00:11:49.232 "zcopy": false, 00:11:49.232 
"get_zone_info": false, 00:11:49.232 "zone_management": false, 00:11:49.232 "zone_append": false, 00:11:49.232 "compare": false, 00:11:49.232 "compare_and_write": false, 00:11:49.232 "abort": false, 00:11:49.232 "seek_hole": false, 00:11:49.232 "seek_data": false, 00:11:49.232 "copy": false, 00:11:49.232 "nvme_iov_md": false 00:11:49.232 }, 00:11:49.232 "memory_domains": [ 00:11:49.232 { 00:11:49.232 "dma_device_id": "system", 00:11:49.232 "dma_device_type": 1 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.232 "dma_device_type": 2 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "dma_device_id": "system", 00:11:49.232 "dma_device_type": 1 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.232 "dma_device_type": 2 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "dma_device_id": "system", 00:11:49.232 "dma_device_type": 1 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.232 "dma_device_type": 2 00:11:49.232 } 00:11:49.232 ], 00:11:49.232 "driver_specific": { 00:11:49.232 "raid": { 00:11:49.232 "uuid": "06188a01-7950-4413-9f8e-3612f66d02ad", 00:11:49.232 "strip_size_kb": 0, 00:11:49.232 "state": "online", 00:11:49.232 "raid_level": "raid1", 00:11:49.232 "superblock": false, 00:11:49.232 "num_base_bdevs": 3, 00:11:49.232 "num_base_bdevs_discovered": 3, 00:11:49.232 "num_base_bdevs_operational": 3, 00:11:49.232 "base_bdevs_list": [ 00:11:49.232 { 00:11:49.232 "name": "NewBaseBdev", 00:11:49.232 "uuid": "88d62721-5014-4a7c-be00-cdeaaac55191", 00:11:49.232 "is_configured": true, 00:11:49.232 "data_offset": 0, 00:11:49.232 "data_size": 65536 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "name": "BaseBdev2", 00:11:49.232 "uuid": "6d8238c6-2af6-42b6-a1a7-e991db6f2cc8", 00:11:49.232 "is_configured": true, 00:11:49.232 "data_offset": 0, 00:11:49.232 "data_size": 65536 00:11:49.232 }, 00:11:49.232 { 00:11:49.232 "name": "BaseBdev3", 00:11:49.232 "uuid": 
"f80dab4e-686f-4e8a-b304-3a336d5a72c4", 00:11:49.232 "is_configured": true, 00:11:49.232 "data_offset": 0, 00:11:49.232 "data_size": 65536 00:11:49.232 } 00:11:49.232 ] 00:11:49.232 } 00:11:49.232 } 00:11:49.232 }' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:49.232 BaseBdev2 00:11:49.232 BaseBdev3' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:49.232 [2024-12-05 20:04:50.604982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.232 [2024-12-05 20:04:50.605018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.232 [2024-12-05 20:04:50.605102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.232 [2024-12-05 20:04:50.605436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.232 [2024-12-05 20:04:50.605448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67526 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67526 ']' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67526 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67526 00:11:49.232 killing process with pid 67526 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67526' 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67526 00:11:49.232 
[2024-12-05 20:04:50.639714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.232 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67526 00:11:49.804 [2024-12-05 20:04:50.952581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.754 00:11:50.754 real 0m10.786s 00:11:50.754 user 0m17.232s 00:11:50.754 sys 0m1.864s 00:11:50.754 ************************************ 00:11:50.754 END TEST raid_state_function_test 00:11:50.754 ************************************ 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.754 20:04:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:50.754 20:04:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.754 20:04:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.754 20:04:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.754 ************************************ 00:11:50.754 START TEST raid_state_function_test_sb 00:11:50.754 ************************************ 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.754 20:04:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:50.754 
20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68147 00:11:50.754 Process raid pid: 68147 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68147' 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68147 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68147 ']' 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.754 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.013 [2024-12-05 20:04:52.239893] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:11:51.013 [2024-12-05 20:04:52.240009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.013 [2024-12-05 20:04:52.417443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.271 [2024-12-05 20:04:52.537934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.530 [2024-12-05 20:04:52.746134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.530 [2024-12-05 20:04:52.746174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.789 [2024-12-05 20:04:53.090591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.789 [2024-12-05 20:04:53.090716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.789 [2024-12-05 20:04:53.090740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.789 [2024-12-05 20:04:53.090752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.789 [2024-12-05 20:04:53.090759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:51.789 [2024-12-05 20:04:53.090768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.789 "name": "Existed_Raid", 00:11:51.789 "uuid": "c8f2d44b-280c-42b6-93bc-419206b0f0e5", 00:11:51.789 "strip_size_kb": 0, 00:11:51.789 "state": "configuring", 00:11:51.789 "raid_level": "raid1", 00:11:51.789 "superblock": true, 00:11:51.789 "num_base_bdevs": 3, 00:11:51.789 "num_base_bdevs_discovered": 0, 00:11:51.789 "num_base_bdevs_operational": 3, 00:11:51.789 "base_bdevs_list": [ 00:11:51.789 { 00:11:51.789 "name": "BaseBdev1", 00:11:51.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.789 "is_configured": false, 00:11:51.789 "data_offset": 0, 00:11:51.789 "data_size": 0 00:11:51.789 }, 00:11:51.789 { 00:11:51.789 "name": "BaseBdev2", 00:11:51.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.789 "is_configured": false, 00:11:51.789 "data_offset": 0, 00:11:51.789 "data_size": 0 00:11:51.789 }, 00:11:51.789 { 00:11:51.789 "name": "BaseBdev3", 00:11:51.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.789 "is_configured": false, 00:11:51.789 "data_offset": 0, 00:11:51.789 "data_size": 0 00:11:51.789 } 00:11:51.789 ] 00:11:51.789 }' 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.789 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 [2024-12-05 20:04:53.537786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.380 [2024-12-05 20:04:53.537876] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 [2024-12-05 20:04:53.545758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.380 [2024-12-05 20:04:53.545852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.380 [2024-12-05 20:04:53.545869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.380 [2024-12-05 20:04:53.545880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.380 [2024-12-05 20:04:53.545905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.380 [2024-12-05 20:04:53.545915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 [2024-12-05 20:04:53.589745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.380 BaseBdev1 
00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.380 [ 00:11:52.380 { 00:11:52.380 "name": "BaseBdev1", 00:11:52.380 "aliases": [ 00:11:52.380 "da1c4b2e-ea48-41a3-90ca-da8f205bf169" 00:11:52.380 ], 00:11:52.380 "product_name": "Malloc disk", 00:11:52.380 "block_size": 512, 00:11:52.380 "num_blocks": 65536, 00:11:52.380 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:52.380 "assigned_rate_limits": { 00:11:52.380 
"rw_ios_per_sec": 0, 00:11:52.380 "rw_mbytes_per_sec": 0, 00:11:52.380 "r_mbytes_per_sec": 0, 00:11:52.380 "w_mbytes_per_sec": 0 00:11:52.380 }, 00:11:52.380 "claimed": true, 00:11:52.380 "claim_type": "exclusive_write", 00:11:52.380 "zoned": false, 00:11:52.380 "supported_io_types": { 00:11:52.380 "read": true, 00:11:52.380 "write": true, 00:11:52.380 "unmap": true, 00:11:52.380 "flush": true, 00:11:52.380 "reset": true, 00:11:52.380 "nvme_admin": false, 00:11:52.380 "nvme_io": false, 00:11:52.380 "nvme_io_md": false, 00:11:52.380 "write_zeroes": true, 00:11:52.380 "zcopy": true, 00:11:52.380 "get_zone_info": false, 00:11:52.380 "zone_management": false, 00:11:52.380 "zone_append": false, 00:11:52.380 "compare": false, 00:11:52.380 "compare_and_write": false, 00:11:52.380 "abort": true, 00:11:52.380 "seek_hole": false, 00:11:52.380 "seek_data": false, 00:11:52.380 "copy": true, 00:11:52.380 "nvme_iov_md": false 00:11:52.380 }, 00:11:52.380 "memory_domains": [ 00:11:52.380 { 00:11:52.380 "dma_device_id": "system", 00:11:52.380 "dma_device_type": 1 00:11:52.380 }, 00:11:52.380 { 00:11:52.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.380 "dma_device_type": 2 00:11:52.380 } 00:11:52.380 ], 00:11:52.380 "driver_specific": {} 00:11:52.380 } 00:11:52.380 ] 00:11:52.380 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.381 "name": "Existed_Raid", 00:11:52.381 "uuid": "a37e464a-8128-4a93-9775-6813d1f168a7", 00:11:52.381 "strip_size_kb": 0, 00:11:52.381 "state": "configuring", 00:11:52.381 "raid_level": "raid1", 00:11:52.381 "superblock": true, 00:11:52.381 "num_base_bdevs": 3, 00:11:52.381 "num_base_bdevs_discovered": 1, 00:11:52.381 "num_base_bdevs_operational": 3, 00:11:52.381 "base_bdevs_list": [ 00:11:52.381 { 00:11:52.381 "name": "BaseBdev1", 00:11:52.381 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:52.381 "is_configured": true, 00:11:52.381 "data_offset": 2048, 00:11:52.381 "data_size": 63488 
00:11:52.381 }, 00:11:52.381 { 00:11:52.381 "name": "BaseBdev2", 00:11:52.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.381 "is_configured": false, 00:11:52.381 "data_offset": 0, 00:11:52.381 "data_size": 0 00:11:52.381 }, 00:11:52.381 { 00:11:52.381 "name": "BaseBdev3", 00:11:52.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.381 "is_configured": false, 00:11:52.381 "data_offset": 0, 00:11:52.381 "data_size": 0 00:11:52.381 } 00:11:52.381 ] 00:11:52.381 }' 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.381 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.949 [2024-12-05 20:04:54.116924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.949 [2024-12-05 20:04:54.117043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.949 [2024-12-05 20:04:54.124957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.949 [2024-12-05 20:04:54.126926] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.949 [2024-12-05 20:04:54.126965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.949 [2024-12-05 20:04:54.126975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.949 [2024-12-05 20:04:54.126984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.949 "name": "Existed_Raid", 00:11:52.949 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:52.949 "strip_size_kb": 0, 00:11:52.949 "state": "configuring", 00:11:52.949 "raid_level": "raid1", 00:11:52.949 "superblock": true, 00:11:52.949 "num_base_bdevs": 3, 00:11:52.949 "num_base_bdevs_discovered": 1, 00:11:52.949 "num_base_bdevs_operational": 3, 00:11:52.949 "base_bdevs_list": [ 00:11:52.949 { 00:11:52.949 "name": "BaseBdev1", 00:11:52.949 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:52.949 "is_configured": true, 00:11:52.949 "data_offset": 2048, 00:11:52.949 "data_size": 63488 00:11:52.949 }, 00:11:52.949 { 00:11:52.949 "name": "BaseBdev2", 00:11:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.949 "is_configured": false, 00:11:52.949 "data_offset": 0, 00:11:52.949 "data_size": 0 00:11:52.949 }, 00:11:52.949 { 00:11:52.949 "name": "BaseBdev3", 00:11:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.949 "is_configured": false, 00:11:52.949 "data_offset": 0, 00:11:52.949 "data_size": 0 00:11:52.949 } 00:11:52.949 ] 00:11:52.949 }' 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.949 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.209 [2024-12-05 20:04:54.602391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.209 BaseBdev2 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.209 [ 00:11:53.209 { 00:11:53.209 "name": "BaseBdev2", 00:11:53.209 "aliases": [ 00:11:53.209 "1eb30b56-67f2-44d6-a371-5b2f7cb6e563" 00:11:53.209 ], 00:11:53.209 "product_name": "Malloc disk", 00:11:53.209 "block_size": 512, 00:11:53.209 "num_blocks": 65536, 00:11:53.209 "uuid": "1eb30b56-67f2-44d6-a371-5b2f7cb6e563", 00:11:53.209 "assigned_rate_limits": { 00:11:53.209 "rw_ios_per_sec": 0, 00:11:53.209 "rw_mbytes_per_sec": 0, 00:11:53.209 "r_mbytes_per_sec": 0, 00:11:53.209 "w_mbytes_per_sec": 0 00:11:53.209 }, 00:11:53.209 "claimed": true, 00:11:53.209 "claim_type": "exclusive_write", 00:11:53.209 "zoned": false, 00:11:53.209 "supported_io_types": { 00:11:53.209 "read": true, 00:11:53.209 "write": true, 00:11:53.209 "unmap": true, 00:11:53.209 "flush": true, 00:11:53.209 "reset": true, 00:11:53.209 "nvme_admin": false, 00:11:53.209 "nvme_io": false, 00:11:53.209 "nvme_io_md": false, 00:11:53.209 "write_zeroes": true, 00:11:53.209 "zcopy": true, 00:11:53.209 "get_zone_info": false, 00:11:53.209 "zone_management": false, 00:11:53.209 "zone_append": false, 00:11:53.209 "compare": false, 00:11:53.209 "compare_and_write": false, 00:11:53.209 "abort": true, 00:11:53.209 "seek_hole": false, 00:11:53.209 "seek_data": false, 00:11:53.209 "copy": true, 00:11:53.209 "nvme_iov_md": false 00:11:53.209 }, 00:11:53.209 "memory_domains": [ 00:11:53.209 { 00:11:53.209 "dma_device_id": "system", 00:11:53.209 "dma_device_type": 1 00:11:53.209 }, 00:11:53.209 { 00:11:53.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.209 "dma_device_type": 2 00:11:53.209 } 00:11:53.209 ], 00:11:53.209 "driver_specific": {} 00:11:53.209 } 00:11:53.209 ] 00:11:53.209 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.468 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.468 
20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.468 "name": "Existed_Raid", 00:11:53.468 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:53.468 "strip_size_kb": 0, 00:11:53.468 "state": "configuring", 00:11:53.468 "raid_level": "raid1", 00:11:53.468 "superblock": true, 00:11:53.468 "num_base_bdevs": 3, 00:11:53.468 "num_base_bdevs_discovered": 2, 00:11:53.468 "num_base_bdevs_operational": 3, 00:11:53.468 "base_bdevs_list": [ 00:11:53.468 { 00:11:53.468 "name": "BaseBdev1", 00:11:53.468 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:53.468 "is_configured": true, 00:11:53.468 "data_offset": 2048, 00:11:53.468 "data_size": 63488 00:11:53.468 }, 00:11:53.468 { 00:11:53.468 "name": "BaseBdev2", 00:11:53.468 "uuid": "1eb30b56-67f2-44d6-a371-5b2f7cb6e563", 00:11:53.468 "is_configured": true, 00:11:53.468 "data_offset": 2048, 00:11:53.468 "data_size": 63488 00:11:53.468 }, 00:11:53.468 { 00:11:53.468 "name": "BaseBdev3", 00:11:53.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.468 "is_configured": false, 00:11:53.468 "data_offset": 0, 00:11:53.468 "data_size": 0 00:11:53.468 } 00:11:53.468 ] 00:11:53.468 }' 00:11:53.469 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.469 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.726 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.726 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.726 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.726 [2024-12-05 20:04:55.149588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.726 [2024-12-05 20:04:55.149981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:53.726 [2024-12-05 20:04:55.150049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.727 [2024-12-05 20:04:55.150360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:53.727 [2024-12-05 20:04:55.150573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.727 [2024-12-05 20:04:55.150621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.727 [2024-12-05 20:04:55.150854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.727 BaseBdev3 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.727 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.985 20:04:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.985 [ 00:11:53.985 { 00:11:53.985 "name": "BaseBdev3", 00:11:53.985 "aliases": [ 00:11:53.985 "f20be2bf-003a-44f7-b43e-a882c7aafa1e" 00:11:53.985 ], 00:11:53.985 "product_name": "Malloc disk", 00:11:53.985 "block_size": 512, 00:11:53.985 "num_blocks": 65536, 00:11:53.985 "uuid": "f20be2bf-003a-44f7-b43e-a882c7aafa1e", 00:11:53.985 "assigned_rate_limits": { 00:11:53.985 "rw_ios_per_sec": 0, 00:11:53.985 "rw_mbytes_per_sec": 0, 00:11:53.985 "r_mbytes_per_sec": 0, 00:11:53.985 "w_mbytes_per_sec": 0 00:11:53.985 }, 00:11:53.985 "claimed": true, 00:11:53.985 "claim_type": "exclusive_write", 00:11:53.985 "zoned": false, 00:11:53.985 "supported_io_types": { 00:11:53.985 "read": true, 00:11:53.985 "write": true, 00:11:53.985 "unmap": true, 00:11:53.985 "flush": true, 00:11:53.985 "reset": true, 00:11:53.985 "nvme_admin": false, 00:11:53.985 "nvme_io": false, 00:11:53.985 "nvme_io_md": false, 00:11:53.985 "write_zeroes": true, 00:11:53.985 "zcopy": true, 00:11:53.985 "get_zone_info": false, 00:11:53.985 "zone_management": false, 00:11:53.985 "zone_append": false, 00:11:53.985 "compare": false, 00:11:53.985 "compare_and_write": false, 00:11:53.985 "abort": true, 00:11:53.985 "seek_hole": false, 00:11:53.985 "seek_data": false, 00:11:53.985 "copy": true, 00:11:53.985 "nvme_iov_md": false 00:11:53.985 }, 00:11:53.985 "memory_domains": [ 00:11:53.985 { 00:11:53.985 "dma_device_id": "system", 00:11:53.985 "dma_device_type": 1 00:11:53.985 }, 00:11:53.985 { 00:11:53.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.985 "dma_device_type": 2 00:11:53.985 } 00:11:53.985 ], 00:11:53.985 "driver_specific": {} 00:11:53.985 } 00:11:53.985 ] 
00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.985 
20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.985 "name": "Existed_Raid", 00:11:53.985 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:53.985 "strip_size_kb": 0, 00:11:53.985 "state": "online", 00:11:53.985 "raid_level": "raid1", 00:11:53.985 "superblock": true, 00:11:53.985 "num_base_bdevs": 3, 00:11:53.985 "num_base_bdevs_discovered": 3, 00:11:53.985 "num_base_bdevs_operational": 3, 00:11:53.985 "base_bdevs_list": [ 00:11:53.985 { 00:11:53.985 "name": "BaseBdev1", 00:11:53.985 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:53.985 "is_configured": true, 00:11:53.985 "data_offset": 2048, 00:11:53.985 "data_size": 63488 00:11:53.985 }, 00:11:53.985 { 00:11:53.985 "name": "BaseBdev2", 00:11:53.985 "uuid": "1eb30b56-67f2-44d6-a371-5b2f7cb6e563", 00:11:53.985 "is_configured": true, 00:11:53.985 "data_offset": 2048, 00:11:53.985 "data_size": 63488 00:11:53.985 }, 00:11:53.985 { 00:11:53.985 "name": "BaseBdev3", 00:11:53.985 "uuid": "f20be2bf-003a-44f7-b43e-a882c7aafa1e", 00:11:53.985 "is_configured": true, 00:11:53.985 "data_offset": 2048, 00:11:53.985 "data_size": 63488 00:11:53.985 } 00:11:53.985 ] 00:11:53.985 }' 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.985 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 [2024-12-05 20:04:55.621220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.244 "name": "Existed_Raid", 00:11:54.244 "aliases": [ 00:11:54.244 "1bf8d25d-d726-4b68-92ad-b5ee477ada82" 00:11:54.244 ], 00:11:54.244 "product_name": "Raid Volume", 00:11:54.244 "block_size": 512, 00:11:54.244 "num_blocks": 63488, 00:11:54.244 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:54.244 "assigned_rate_limits": { 00:11:54.244 "rw_ios_per_sec": 0, 00:11:54.244 "rw_mbytes_per_sec": 0, 00:11:54.244 "r_mbytes_per_sec": 0, 00:11:54.244 "w_mbytes_per_sec": 0 00:11:54.244 }, 00:11:54.244 "claimed": false, 00:11:54.244 "zoned": false, 00:11:54.244 "supported_io_types": { 00:11:54.244 "read": true, 00:11:54.244 "write": true, 00:11:54.244 "unmap": false, 00:11:54.244 "flush": false, 00:11:54.244 "reset": true, 00:11:54.244 "nvme_admin": false, 00:11:54.244 "nvme_io": false, 00:11:54.244 "nvme_io_md": false, 00:11:54.244 "write_zeroes": true, 
00:11:54.244 "zcopy": false, 00:11:54.244 "get_zone_info": false, 00:11:54.244 "zone_management": false, 00:11:54.244 "zone_append": false, 00:11:54.244 "compare": false, 00:11:54.244 "compare_and_write": false, 00:11:54.244 "abort": false, 00:11:54.244 "seek_hole": false, 00:11:54.244 "seek_data": false, 00:11:54.244 "copy": false, 00:11:54.244 "nvme_iov_md": false 00:11:54.244 }, 00:11:54.244 "memory_domains": [ 00:11:54.244 { 00:11:54.244 "dma_device_id": "system", 00:11:54.244 "dma_device_type": 1 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.244 "dma_device_type": 2 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "dma_device_id": "system", 00:11:54.244 "dma_device_type": 1 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.244 "dma_device_type": 2 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "dma_device_id": "system", 00:11:54.244 "dma_device_type": 1 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.244 "dma_device_type": 2 00:11:54.244 } 00:11:54.244 ], 00:11:54.244 "driver_specific": { 00:11:54.244 "raid": { 00:11:54.244 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:54.244 "strip_size_kb": 0, 00:11:54.244 "state": "online", 00:11:54.244 "raid_level": "raid1", 00:11:54.244 "superblock": true, 00:11:54.244 "num_base_bdevs": 3, 00:11:54.244 "num_base_bdevs_discovered": 3, 00:11:54.244 "num_base_bdevs_operational": 3, 00:11:54.244 "base_bdevs_list": [ 00:11:54.244 { 00:11:54.244 "name": "BaseBdev1", 00:11:54.244 "uuid": "da1c4b2e-ea48-41a3-90ca-da8f205bf169", 00:11:54.244 "is_configured": true, 00:11:54.244 "data_offset": 2048, 00:11:54.244 "data_size": 63488 00:11:54.244 }, 00:11:54.244 { 00:11:54.244 "name": "BaseBdev2", 00:11:54.244 "uuid": "1eb30b56-67f2-44d6-a371-5b2f7cb6e563", 00:11:54.244 "is_configured": true, 00:11:54.244 "data_offset": 2048, 00:11:54.244 "data_size": 63488 00:11:54.244 }, 00:11:54.244 { 
00:11:54.244 "name": "BaseBdev3", 00:11:54.244 "uuid": "f20be2bf-003a-44f7-b43e-a882c7aafa1e", 00:11:54.244 "is_configured": true, 00:11:54.244 "data_offset": 2048, 00:11:54.244 "data_size": 63488 00:11:54.244 } 00:11:54.244 ] 00:11:54.244 } 00:11:54.244 } 00:11:54.244 }' 00:11:54.244 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.503 BaseBdev2 00:11:54.503 BaseBdev3' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.503 20:04:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.503 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.503 [2024-12-05 20:04:55.864495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.762 
20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.762 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.762 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.762 "name": "Existed_Raid", 00:11:54.762 "uuid": "1bf8d25d-d726-4b68-92ad-b5ee477ada82", 00:11:54.762 "strip_size_kb": 0, 00:11:54.762 "state": "online", 00:11:54.762 "raid_level": "raid1", 00:11:54.762 "superblock": true, 00:11:54.762 "num_base_bdevs": 3, 00:11:54.762 "num_base_bdevs_discovered": 2, 00:11:54.762 "num_base_bdevs_operational": 2, 00:11:54.762 "base_bdevs_list": [ 00:11:54.762 { 00:11:54.762 "name": null, 00:11:54.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.762 "is_configured": false, 00:11:54.762 "data_offset": 0, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "BaseBdev2", 00:11:54.762 "uuid": "1eb30b56-67f2-44d6-a371-5b2f7cb6e563", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "BaseBdev3", 00:11:54.762 "uuid": "f20be2bf-003a-44f7-b43e-a882c7aafa1e", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 } 00:11:54.762 ] 00:11:54.762 }' 00:11:54.762 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.762 
20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.021 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.021 [2024-12-05 20:04:56.438596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.281 [2024-12-05 20:04:56.590017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.281 [2024-12-05 20:04:56.590122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.281 [2024-12-05 20:04:56.686383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.281 [2024-12-05 20:04:56.686436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.281 [2024-12-05 20:04:56.686448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.281 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 BaseBdev2 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.541 20:04:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 [ 00:11:55.541 { 00:11:55.541 "name": "BaseBdev2", 00:11:55.541 "aliases": [ 00:11:55.541 "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34" 00:11:55.541 ], 00:11:55.541 "product_name": "Malloc disk", 00:11:55.541 "block_size": 512, 00:11:55.541 "num_blocks": 65536, 00:11:55.541 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:55.541 "assigned_rate_limits": { 00:11:55.541 "rw_ios_per_sec": 0, 00:11:55.541 "rw_mbytes_per_sec": 0, 00:11:55.541 "r_mbytes_per_sec": 0, 00:11:55.541 "w_mbytes_per_sec": 0 00:11:55.541 }, 00:11:55.541 "claimed": false, 00:11:55.541 "zoned": false, 00:11:55.541 "supported_io_types": { 00:11:55.541 "read": true, 00:11:55.541 "write": true, 00:11:55.541 "unmap": true, 00:11:55.541 "flush": true, 00:11:55.541 "reset": true, 00:11:55.541 "nvme_admin": false, 00:11:55.541 "nvme_io": false, 00:11:55.541 "nvme_io_md": false, 00:11:55.541 
"write_zeroes": true, 00:11:55.541 "zcopy": true, 00:11:55.541 "get_zone_info": false, 00:11:55.541 "zone_management": false, 00:11:55.541 "zone_append": false, 00:11:55.541 "compare": false, 00:11:55.541 "compare_and_write": false, 00:11:55.541 "abort": true, 00:11:55.541 "seek_hole": false, 00:11:55.541 "seek_data": false, 00:11:55.541 "copy": true, 00:11:55.541 "nvme_iov_md": false 00:11:55.541 }, 00:11:55.541 "memory_domains": [ 00:11:55.541 { 00:11:55.541 "dma_device_id": "system", 00:11:55.541 "dma_device_type": 1 00:11:55.541 }, 00:11:55.541 { 00:11:55.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.541 "dma_device_type": 2 00:11:55.541 } 00:11:55.541 ], 00:11:55.541 "driver_specific": {} 00:11:55.541 } 00:11:55.541 ] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 BaseBdev3 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.541 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.541 [ 00:11:55.541 { 00:11:55.541 "name": "BaseBdev3", 00:11:55.541 "aliases": [ 00:11:55.541 "b853e1ea-49a4-4e68-9177-dfd76ce3363d" 00:11:55.541 ], 00:11:55.541 "product_name": "Malloc disk", 00:11:55.541 "block_size": 512, 00:11:55.541 "num_blocks": 65536, 00:11:55.541 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:55.541 "assigned_rate_limits": { 00:11:55.541 "rw_ios_per_sec": 0, 00:11:55.541 "rw_mbytes_per_sec": 0, 00:11:55.541 "r_mbytes_per_sec": 0, 00:11:55.541 "w_mbytes_per_sec": 0 00:11:55.541 }, 00:11:55.541 "claimed": false, 00:11:55.541 "zoned": false, 00:11:55.541 "supported_io_types": { 00:11:55.541 "read": true, 00:11:55.541 "write": true, 00:11:55.541 "unmap": true, 00:11:55.541 "flush": true, 00:11:55.541 "reset": true, 00:11:55.541 "nvme_admin": false, 00:11:55.541 "nvme_io": false, 
00:11:55.541 "nvme_io_md": false, 00:11:55.541 "write_zeroes": true, 00:11:55.541 "zcopy": true, 00:11:55.541 "get_zone_info": false, 00:11:55.541 "zone_management": false, 00:11:55.541 "zone_append": false, 00:11:55.541 "compare": false, 00:11:55.541 "compare_and_write": false, 00:11:55.541 "abort": true, 00:11:55.541 "seek_hole": false, 00:11:55.541 "seek_data": false, 00:11:55.541 "copy": true, 00:11:55.542 "nvme_iov_md": false 00:11:55.542 }, 00:11:55.542 "memory_domains": [ 00:11:55.542 { 00:11:55.542 "dma_device_id": "system", 00:11:55.542 "dma_device_type": 1 00:11:55.542 }, 00:11:55.542 { 00:11:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.542 "dma_device_type": 2 00:11:55.542 } 00:11:55.542 ], 00:11:55.542 "driver_specific": {} 00:11:55.542 } 00:11:55.542 ] 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.542 [2024-12-05 20:04:56.909198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.542 [2024-12-05 20:04:56.909262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.542 [2024-12-05 20:04:56.909290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:11:55.542 [2024-12-05 20:04:56.911279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.542 "name": "Existed_Raid", 00:11:55.542 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:55.542 "strip_size_kb": 0, 00:11:55.542 "state": "configuring", 00:11:55.542 "raid_level": "raid1", 00:11:55.542 "superblock": true, 00:11:55.542 "num_base_bdevs": 3, 00:11:55.542 "num_base_bdevs_discovered": 2, 00:11:55.542 "num_base_bdevs_operational": 3, 00:11:55.542 "base_bdevs_list": [ 00:11:55.542 { 00:11:55.542 "name": "BaseBdev1", 00:11:55.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.542 "is_configured": false, 00:11:55.542 "data_offset": 0, 00:11:55.542 "data_size": 0 00:11:55.542 }, 00:11:55.542 { 00:11:55.542 "name": "BaseBdev2", 00:11:55.542 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:55.542 "is_configured": true, 00:11:55.542 "data_offset": 2048, 00:11:55.542 "data_size": 63488 00:11:55.542 }, 00:11:55.542 { 00:11:55.542 "name": "BaseBdev3", 00:11:55.542 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:55.542 "is_configured": true, 00:11:55.542 "data_offset": 2048, 00:11:55.542 "data_size": 63488 00:11:55.542 } 00:11:55.542 ] 00:11:55.542 }' 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.542 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.111 [2024-12-05 20:04:57.352423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.111 "name": "Existed_Raid", 00:11:56.111 "uuid": 
"c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:56.111 "strip_size_kb": 0, 00:11:56.111 "state": "configuring", 00:11:56.111 "raid_level": "raid1", 00:11:56.111 "superblock": true, 00:11:56.111 "num_base_bdevs": 3, 00:11:56.111 "num_base_bdevs_discovered": 1, 00:11:56.111 "num_base_bdevs_operational": 3, 00:11:56.111 "base_bdevs_list": [ 00:11:56.111 { 00:11:56.111 "name": "BaseBdev1", 00:11:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.111 "is_configured": false, 00:11:56.111 "data_offset": 0, 00:11:56.111 "data_size": 0 00:11:56.111 }, 00:11:56.111 { 00:11:56.111 "name": null, 00:11:56.111 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:56.111 "is_configured": false, 00:11:56.111 "data_offset": 0, 00:11:56.111 "data_size": 63488 00:11:56.111 }, 00:11:56.111 { 00:11:56.111 "name": "BaseBdev3", 00:11:56.111 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:56.111 "is_configured": true, 00:11:56.111 "data_offset": 2048, 00:11:56.111 "data_size": 63488 00:11:56.111 } 00:11:56.111 ] 00:11:56.111 }' 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.111 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.370 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.370 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.370 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.370 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:56.639 20:04:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.639 [2024-12-05 20:04:57.882116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.639 BaseBdev1 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.639 [ 00:11:56.639 { 00:11:56.639 "name": "BaseBdev1", 00:11:56.639 "aliases": [ 00:11:56.639 "237f2f30-ff3d-47f7-8066-6a95495183bd" 00:11:56.639 ], 00:11:56.639 "product_name": "Malloc disk", 00:11:56.639 "block_size": 512, 00:11:56.639 "num_blocks": 65536, 00:11:56.639 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:56.639 "assigned_rate_limits": { 00:11:56.639 "rw_ios_per_sec": 0, 00:11:56.639 "rw_mbytes_per_sec": 0, 00:11:56.639 "r_mbytes_per_sec": 0, 00:11:56.639 "w_mbytes_per_sec": 0 00:11:56.639 }, 00:11:56.639 "claimed": true, 00:11:56.639 "claim_type": "exclusive_write", 00:11:56.639 "zoned": false, 00:11:56.639 "supported_io_types": { 00:11:56.639 "read": true, 00:11:56.639 "write": true, 00:11:56.639 "unmap": true, 00:11:56.639 "flush": true, 00:11:56.639 "reset": true, 00:11:56.639 "nvme_admin": false, 00:11:56.639 "nvme_io": false, 00:11:56.639 "nvme_io_md": false, 00:11:56.639 "write_zeroes": true, 00:11:56.639 "zcopy": true, 00:11:56.639 "get_zone_info": false, 00:11:56.639 "zone_management": false, 00:11:56.639 "zone_append": false, 00:11:56.639 "compare": false, 00:11:56.639 "compare_and_write": false, 00:11:56.639 "abort": true, 00:11:56.639 "seek_hole": false, 00:11:56.639 "seek_data": false, 00:11:56.639 "copy": true, 00:11:56.639 "nvme_iov_md": false 00:11:56.639 }, 00:11:56.639 "memory_domains": [ 00:11:56.639 { 00:11:56.639 "dma_device_id": "system", 00:11:56.639 "dma_device_type": 1 00:11:56.639 }, 00:11:56.639 { 00:11:56.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.639 "dma_device_type": 2 00:11:56.639 } 00:11:56.639 ], 00:11:56.639 "driver_specific": {} 00:11:56.639 } 00:11:56.639 ] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.639 
20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.639 "name": "Existed_Raid", 00:11:56.639 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:56.639 "strip_size_kb": 0, 
00:11:56.639 "state": "configuring", 00:11:56.639 "raid_level": "raid1", 00:11:56.639 "superblock": true, 00:11:56.639 "num_base_bdevs": 3, 00:11:56.639 "num_base_bdevs_discovered": 2, 00:11:56.639 "num_base_bdevs_operational": 3, 00:11:56.639 "base_bdevs_list": [ 00:11:56.639 { 00:11:56.639 "name": "BaseBdev1", 00:11:56.639 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:56.639 "is_configured": true, 00:11:56.639 "data_offset": 2048, 00:11:56.639 "data_size": 63488 00:11:56.639 }, 00:11:56.639 { 00:11:56.639 "name": null, 00:11:56.639 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:56.639 "is_configured": false, 00:11:56.639 "data_offset": 0, 00:11:56.639 "data_size": 63488 00:11:56.639 }, 00:11:56.639 { 00:11:56.639 "name": "BaseBdev3", 00:11:56.639 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:56.639 "is_configured": true, 00:11:56.639 "data_offset": 2048, 00:11:56.639 "data_size": 63488 00:11:56.639 } 00:11:56.639 ] 00:11:56.639 }' 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.639 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.229 [2024-12-05 20:04:58.381304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.229 20:04:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.229 "name": "Existed_Raid", 00:11:57.229 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:57.229 "strip_size_kb": 0, 00:11:57.229 "state": "configuring", 00:11:57.229 "raid_level": "raid1", 00:11:57.229 "superblock": true, 00:11:57.229 "num_base_bdevs": 3, 00:11:57.229 "num_base_bdevs_discovered": 1, 00:11:57.229 "num_base_bdevs_operational": 3, 00:11:57.229 "base_bdevs_list": [ 00:11:57.229 { 00:11:57.229 "name": "BaseBdev1", 00:11:57.229 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:57.229 "is_configured": true, 00:11:57.229 "data_offset": 2048, 00:11:57.229 "data_size": 63488 00:11:57.229 }, 00:11:57.229 { 00:11:57.229 "name": null, 00:11:57.229 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:57.229 "is_configured": false, 00:11:57.229 "data_offset": 0, 00:11:57.229 "data_size": 63488 00:11:57.229 }, 00:11:57.229 { 00:11:57.229 "name": null, 00:11:57.229 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:57.229 "is_configured": false, 00:11:57.229 "data_offset": 0, 00:11:57.229 "data_size": 63488 00:11:57.229 } 00:11:57.229 ] 00:11:57.229 }' 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.229 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.489 [2024-12-05 20:04:58.864554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.489 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.749 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.749 "name": "Existed_Raid", 00:11:57.749 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:57.749 "strip_size_kb": 0, 00:11:57.749 "state": "configuring", 00:11:57.749 "raid_level": "raid1", 00:11:57.749 "superblock": true, 00:11:57.749 "num_base_bdevs": 3, 00:11:57.749 "num_base_bdevs_discovered": 2, 00:11:57.749 "num_base_bdevs_operational": 3, 00:11:57.749 "base_bdevs_list": [ 00:11:57.749 { 00:11:57.749 "name": "BaseBdev1", 00:11:57.749 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:57.749 "is_configured": true, 00:11:57.749 "data_offset": 2048, 00:11:57.749 "data_size": 63488 00:11:57.749 }, 00:11:57.749 { 00:11:57.749 "name": null, 00:11:57.749 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:57.749 "is_configured": false, 00:11:57.749 "data_offset": 0, 00:11:57.749 "data_size": 63488 00:11:57.749 }, 00:11:57.749 { 00:11:57.749 "name": "BaseBdev3", 00:11:57.749 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:57.749 "is_configured": true, 00:11:57.749 "data_offset": 2048, 00:11:57.749 "data_size": 63488 00:11:57.749 } 00:11:57.749 ] 00:11:57.749 }' 00:11:57.749 20:04:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.749 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.009 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 [2024-12-05 20:04:59.407642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.269 "name": "Existed_Raid", 00:11:58.269 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:58.269 "strip_size_kb": 0, 00:11:58.269 "state": "configuring", 00:11:58.269 "raid_level": "raid1", 00:11:58.269 "superblock": true, 00:11:58.269 "num_base_bdevs": 3, 00:11:58.269 "num_base_bdevs_discovered": 1, 00:11:58.269 "num_base_bdevs_operational": 3, 00:11:58.269 "base_bdevs_list": [ 00:11:58.269 { 00:11:58.269 "name": null, 00:11:58.269 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:58.269 "is_configured": false, 00:11:58.269 "data_offset": 0, 00:11:58.269 "data_size": 63488 00:11:58.269 }, 00:11:58.269 { 00:11:58.269 "name": null, 00:11:58.269 "uuid": 
"f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:58.269 "is_configured": false, 00:11:58.269 "data_offset": 0, 00:11:58.269 "data_size": 63488 00:11:58.269 }, 00:11:58.269 { 00:11:58.269 "name": "BaseBdev3", 00:11:58.269 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:58.269 "is_configured": true, 00:11:58.269 "data_offset": 2048, 00:11:58.269 "data_size": 63488 00:11:58.269 } 00:11:58.269 ] 00:11:58.269 }' 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.269 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.528 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.528 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.528 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.528 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.528 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.787 [2024-12-05 20:04:59.993990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.787 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.787 "name": "Existed_Raid", 00:11:58.787 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:58.787 "strip_size_kb": 0, 00:11:58.787 "state": "configuring", 00:11:58.787 
"raid_level": "raid1", 00:11:58.787 "superblock": true, 00:11:58.787 "num_base_bdevs": 3, 00:11:58.787 "num_base_bdevs_discovered": 2, 00:11:58.787 "num_base_bdevs_operational": 3, 00:11:58.787 "base_bdevs_list": [ 00:11:58.787 { 00:11:58.787 "name": null, 00:11:58.787 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:58.787 "is_configured": false, 00:11:58.787 "data_offset": 0, 00:11:58.787 "data_size": 63488 00:11:58.787 }, 00:11:58.787 { 00:11:58.787 "name": "BaseBdev2", 00:11:58.787 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:58.787 "is_configured": true, 00:11:58.787 "data_offset": 2048, 00:11:58.787 "data_size": 63488 00:11:58.787 }, 00:11:58.787 { 00:11:58.787 "name": "BaseBdev3", 00:11:58.787 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:58.787 "is_configured": true, 00:11:58.787 "data_offset": 2048, 00:11:58.787 "data_size": 63488 00:11:58.787 } 00:11:58.787 ] 00:11:58.787 }' 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.787 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.046 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.046 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.046 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.046 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.046 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.305 20:05:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 237f2f30-ff3d-47f7-8066-6a95495183bd 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.305 [2024-12-05 20:05:00.578559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.305 NewBaseBdev 00:11:59.305 [2024-12-05 20:05:00.578882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.305 [2024-12-05 20:05:00.578915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.305 [2024-12-05 20:05:00.579166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:59.305 [2024-12-05 20:05:00.579317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.305 [2024-12-05 20:05:00.579328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.305 [2024-12-05 20:05:00.579462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.305 
20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.305 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.306 [ 00:11:59.306 { 00:11:59.306 "name": "NewBaseBdev", 00:11:59.306 "aliases": [ 00:11:59.306 "237f2f30-ff3d-47f7-8066-6a95495183bd" 00:11:59.306 ], 00:11:59.306 "product_name": "Malloc disk", 00:11:59.306 "block_size": 512, 00:11:59.306 "num_blocks": 65536, 00:11:59.306 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:59.306 "assigned_rate_limits": { 00:11:59.306 "rw_ios_per_sec": 0, 00:11:59.306 "rw_mbytes_per_sec": 0, 00:11:59.306 "r_mbytes_per_sec": 0, 00:11:59.306 "w_mbytes_per_sec": 0 00:11:59.306 }, 00:11:59.306 "claimed": true, 00:11:59.306 "claim_type": "exclusive_write", 00:11:59.306 
"zoned": false, 00:11:59.306 "supported_io_types": { 00:11:59.306 "read": true, 00:11:59.306 "write": true, 00:11:59.306 "unmap": true, 00:11:59.306 "flush": true, 00:11:59.306 "reset": true, 00:11:59.306 "nvme_admin": false, 00:11:59.306 "nvme_io": false, 00:11:59.306 "nvme_io_md": false, 00:11:59.306 "write_zeroes": true, 00:11:59.306 "zcopy": true, 00:11:59.306 "get_zone_info": false, 00:11:59.306 "zone_management": false, 00:11:59.306 "zone_append": false, 00:11:59.306 "compare": false, 00:11:59.306 "compare_and_write": false, 00:11:59.306 "abort": true, 00:11:59.306 "seek_hole": false, 00:11:59.306 "seek_data": false, 00:11:59.306 "copy": true, 00:11:59.306 "nvme_iov_md": false 00:11:59.306 }, 00:11:59.306 "memory_domains": [ 00:11:59.306 { 00:11:59.306 "dma_device_id": "system", 00:11:59.306 "dma_device_type": 1 00:11:59.306 }, 00:11:59.306 { 00:11:59.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.306 "dma_device_type": 2 00:11:59.306 } 00:11:59.306 ], 00:11:59.306 "driver_specific": {} 00:11:59.306 } 00:11:59.306 ] 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.306 "name": "Existed_Raid", 00:11:59.306 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:59.306 "strip_size_kb": 0, 00:11:59.306 "state": "online", 00:11:59.306 "raid_level": "raid1", 00:11:59.306 "superblock": true, 00:11:59.306 "num_base_bdevs": 3, 00:11:59.306 "num_base_bdevs_discovered": 3, 00:11:59.306 "num_base_bdevs_operational": 3, 00:11:59.306 "base_bdevs_list": [ 00:11:59.306 { 00:11:59.306 "name": "NewBaseBdev", 00:11:59.306 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:59.306 "is_configured": true, 00:11:59.306 "data_offset": 2048, 00:11:59.306 "data_size": 63488 00:11:59.306 }, 00:11:59.306 { 00:11:59.306 "name": "BaseBdev2", 00:11:59.306 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:59.306 "is_configured": true, 00:11:59.306 "data_offset": 2048, 00:11:59.306 "data_size": 63488 00:11:59.306 }, 00:11:59.306 
{ 00:11:59.306 "name": "BaseBdev3", 00:11:59.306 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:59.306 "is_configured": true, 00:11:59.306 "data_offset": 2048, 00:11:59.306 "data_size": 63488 00:11:59.306 } 00:11:59.306 ] 00:11:59.306 }' 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.306 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 [2024-12-05 20:05:01.098072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.875 "name": "Existed_Raid", 00:11:59.875 
"aliases": [ 00:11:59.875 "c90b8a11-6a3e-45f0-906f-f77e7da8c079" 00:11:59.875 ], 00:11:59.875 "product_name": "Raid Volume", 00:11:59.875 "block_size": 512, 00:11:59.875 "num_blocks": 63488, 00:11:59.875 "uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:59.875 "assigned_rate_limits": { 00:11:59.875 "rw_ios_per_sec": 0, 00:11:59.875 "rw_mbytes_per_sec": 0, 00:11:59.875 "r_mbytes_per_sec": 0, 00:11:59.875 "w_mbytes_per_sec": 0 00:11:59.875 }, 00:11:59.875 "claimed": false, 00:11:59.875 "zoned": false, 00:11:59.875 "supported_io_types": { 00:11:59.875 "read": true, 00:11:59.875 "write": true, 00:11:59.875 "unmap": false, 00:11:59.875 "flush": false, 00:11:59.875 "reset": true, 00:11:59.875 "nvme_admin": false, 00:11:59.875 "nvme_io": false, 00:11:59.875 "nvme_io_md": false, 00:11:59.875 "write_zeroes": true, 00:11:59.875 "zcopy": false, 00:11:59.875 "get_zone_info": false, 00:11:59.875 "zone_management": false, 00:11:59.875 "zone_append": false, 00:11:59.875 "compare": false, 00:11:59.875 "compare_and_write": false, 00:11:59.875 "abort": false, 00:11:59.875 "seek_hole": false, 00:11:59.875 "seek_data": false, 00:11:59.875 "copy": false, 00:11:59.875 "nvme_iov_md": false 00:11:59.875 }, 00:11:59.875 "memory_domains": [ 00:11:59.875 { 00:11:59.875 "dma_device_id": "system", 00:11:59.875 "dma_device_type": 1 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.875 "dma_device_type": 2 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "dma_device_id": "system", 00:11:59.875 "dma_device_type": 1 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.875 "dma_device_type": 2 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "dma_device_id": "system", 00:11:59.875 "dma_device_type": 1 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.875 "dma_device_type": 2 00:11:59.875 } 00:11:59.875 ], 00:11:59.875 "driver_specific": { 00:11:59.875 "raid": { 00:11:59.875 
"uuid": "c90b8a11-6a3e-45f0-906f-f77e7da8c079", 00:11:59.875 "strip_size_kb": 0, 00:11:59.875 "state": "online", 00:11:59.875 "raid_level": "raid1", 00:11:59.875 "superblock": true, 00:11:59.875 "num_base_bdevs": 3, 00:11:59.875 "num_base_bdevs_discovered": 3, 00:11:59.875 "num_base_bdevs_operational": 3, 00:11:59.875 "base_bdevs_list": [ 00:11:59.875 { 00:11:59.875 "name": "NewBaseBdev", 00:11:59.875 "uuid": "237f2f30-ff3d-47f7-8066-6a95495183bd", 00:11:59.875 "is_configured": true, 00:11:59.875 "data_offset": 2048, 00:11:59.875 "data_size": 63488 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "name": "BaseBdev2", 00:11:59.875 "uuid": "f0b47efc-0d41-4a1f-aeec-cfc9b24aef34", 00:11:59.875 "is_configured": true, 00:11:59.875 "data_offset": 2048, 00:11:59.875 "data_size": 63488 00:11:59.875 }, 00:11:59.875 { 00:11:59.875 "name": "BaseBdev3", 00:11:59.875 "uuid": "b853e1ea-49a4-4e68-9177-dfd76ce3363d", 00:11:59.875 "is_configured": true, 00:11:59.875 "data_offset": 2048, 00:11:59.875 "data_size": 63488 00:11:59.875 } 00:11:59.875 ] 00:11:59.875 } 00:11:59.875 } 00:11:59.875 }' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:59.875 BaseBdev2 00:11:59.875 BaseBdev3' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:59.875 20:05:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.134 20:05:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.134 [2024-12-05 20:05:01.369311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.134 [2024-12-05 20:05:01.369409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.134 [2024-12-05 20:05:01.369518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.134 [2024-12-05 20:05:01.369838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.134 [2024-12-05 20:05:01.369849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68147 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68147 ']' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68147 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68147 00:12:00.134 killing process with pid 68147 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68147' 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68147 00:12:00.134 [2024-12-05 20:05:01.408714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.134 20:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68147 00:12:00.393 [2024-12-05 20:05:01.725348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.772 ************************************ 00:12:01.772 END TEST raid_state_function_test_sb 00:12:01.772 ************************************ 00:12:01.772 20:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.773 00:12:01.773 real 0m10.768s 00:12:01.773 user 0m17.162s 00:12:01.773 sys 0m1.784s 00:12:01.773 20:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.773 20:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.773 20:05:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:01.773 20:05:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.773 20:05:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.773 20:05:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.773 ************************************ 00:12:01.773 START TEST raid_superblock_test 00:12:01.773 ************************************ 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:01.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68773 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68773 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68773 ']' 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.773 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:01.773 [2024-12-05 20:05:03.059223] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:01.773 [2024-12-05 20:05:03.059467] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68773 ] 00:12:02.033 [2024-12-05 20:05:03.234845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.033 [2024-12-05 20:05:03.356485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.292 [2024-12-05 20:05:03.573532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.293 [2024-12-05 20:05:03.573684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:02.553 
20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 malloc1 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 [2024-12-05 20:05:03.957237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:02.553 [2024-12-05 20:05:03.957358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.553 [2024-12-05 20:05:03.957409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:02.553 [2024-12-05 20:05:03.957453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.553 [2024-12-05 20:05:03.959789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.553 [2024-12-05 20:05:03.959872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.553 pt1 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.553 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.813 malloc2 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.813 [2024-12-05 20:05:04.019431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:02.813 [2024-12-05 20:05:04.019492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.813 [2024-12-05 20:05:04.019519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:02.813 [2024-12-05 20:05:04.019529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.813 [2024-12-05 20:05:04.021790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.813 [2024-12-05 20:05:04.021829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:02.813 
pt2 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:02.813 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 malloc3 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 [2024-12-05 20:05:04.087489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:02.814 [2024-12-05 20:05:04.087599] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.814 [2024-12-05 20:05:04.087655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:02.814 [2024-12-05 20:05:04.087693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.814 [2024-12-05 20:05:04.090063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.814 [2024-12-05 20:05:04.090139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:02.814 pt3 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 [2024-12-05 20:05:04.099505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:02.814 [2024-12-05 20:05:04.101497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:02.814 [2024-12-05 20:05:04.101640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:02.814 [2024-12-05 20:05:04.101895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:02.814 [2024-12-05 20:05:04.101962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:02.814 [2024-12-05 20:05:04.102271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:02.814 
[2024-12-05 20:05:04.102495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:02.814 [2024-12-05 20:05:04.102545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:02.814 [2024-12-05 20:05:04.102750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.814 "name": "raid_bdev1", 00:12:02.814 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:02.814 "strip_size_kb": 0, 00:12:02.814 "state": "online", 00:12:02.814 "raid_level": "raid1", 00:12:02.814 "superblock": true, 00:12:02.814 "num_base_bdevs": 3, 00:12:02.814 "num_base_bdevs_discovered": 3, 00:12:02.814 "num_base_bdevs_operational": 3, 00:12:02.814 "base_bdevs_list": [ 00:12:02.814 { 00:12:02.814 "name": "pt1", 00:12:02.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.814 "is_configured": true, 00:12:02.814 "data_offset": 2048, 00:12:02.814 "data_size": 63488 00:12:02.814 }, 00:12:02.814 { 00:12:02.814 "name": "pt2", 00:12:02.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.814 "is_configured": true, 00:12:02.814 "data_offset": 2048, 00:12:02.814 "data_size": 63488 00:12:02.814 }, 00:12:02.814 { 00:12:02.814 "name": "pt3", 00:12:02.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.814 "is_configured": true, 00:12:02.814 "data_offset": 2048, 00:12:02.814 "data_size": 63488 00:12:02.814 } 00:12:02.814 ] 00:12:02.814 }' 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.814 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.074 20:05:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.074 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.333 [2024-12-05 20:05:04.511193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.333 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.333 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.333 "name": "raid_bdev1", 00:12:03.333 "aliases": [ 00:12:03.333 "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb" 00:12:03.333 ], 00:12:03.333 "product_name": "Raid Volume", 00:12:03.333 "block_size": 512, 00:12:03.333 "num_blocks": 63488, 00:12:03.333 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:03.333 "assigned_rate_limits": { 00:12:03.333 "rw_ios_per_sec": 0, 00:12:03.333 "rw_mbytes_per_sec": 0, 00:12:03.333 "r_mbytes_per_sec": 0, 00:12:03.333 "w_mbytes_per_sec": 0 00:12:03.333 }, 00:12:03.333 "claimed": false, 00:12:03.333 "zoned": false, 00:12:03.333 "supported_io_types": { 00:12:03.333 "read": true, 00:12:03.333 "write": true, 00:12:03.333 "unmap": false, 00:12:03.333 "flush": false, 00:12:03.333 "reset": true, 00:12:03.333 "nvme_admin": false, 00:12:03.333 "nvme_io": false, 00:12:03.333 "nvme_io_md": false, 00:12:03.333 "write_zeroes": true, 00:12:03.333 "zcopy": false, 00:12:03.333 "get_zone_info": false, 00:12:03.333 "zone_management": false, 00:12:03.333 "zone_append": false, 00:12:03.333 "compare": false, 00:12:03.333 
"compare_and_write": false, 00:12:03.333 "abort": false, 00:12:03.333 "seek_hole": false, 00:12:03.333 "seek_data": false, 00:12:03.333 "copy": false, 00:12:03.333 "nvme_iov_md": false 00:12:03.333 }, 00:12:03.333 "memory_domains": [ 00:12:03.333 { 00:12:03.333 "dma_device_id": "system", 00:12:03.333 "dma_device_type": 1 00:12:03.333 }, 00:12:03.333 { 00:12:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.334 "dma_device_type": 2 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "dma_device_id": "system", 00:12:03.334 "dma_device_type": 1 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.334 "dma_device_type": 2 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "dma_device_id": "system", 00:12:03.334 "dma_device_type": 1 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.334 "dma_device_type": 2 00:12:03.334 } 00:12:03.334 ], 00:12:03.334 "driver_specific": { 00:12:03.334 "raid": { 00:12:03.334 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:03.334 "strip_size_kb": 0, 00:12:03.334 "state": "online", 00:12:03.334 "raid_level": "raid1", 00:12:03.334 "superblock": true, 00:12:03.334 "num_base_bdevs": 3, 00:12:03.334 "num_base_bdevs_discovered": 3, 00:12:03.334 "num_base_bdevs_operational": 3, 00:12:03.334 "base_bdevs_list": [ 00:12:03.334 { 00:12:03.334 "name": "pt1", 00:12:03.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.334 "is_configured": true, 00:12:03.334 "data_offset": 2048, 00:12:03.334 "data_size": 63488 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "name": "pt2", 00:12:03.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.334 "is_configured": true, 00:12:03.334 "data_offset": 2048, 00:12:03.334 "data_size": 63488 00:12:03.334 }, 00:12:03.334 { 00:12:03.334 "name": "pt3", 00:12:03.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.334 "is_configured": true, 00:12:03.334 "data_offset": 2048, 00:12:03.334 "data_size": 63488 00:12:03.334 } 
00:12:03.334 ] 00:12:03.334 } 00:12:03.334 } 00:12:03.334 }' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.334 pt2 00:12:03.334 pt3' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.334 20:05:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.334 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.593 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 [2024-12-05 20:05:04.814631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64f80e73-c9a5-4cf3-bfd7-58a41e2572fb
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64f80e73-c9a5-4cf3-bfd7-58a41e2572fb ']'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 [2024-12-05 20:05:04.854211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:03.594 [2024-12-05 20:05:04.854245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:03.594 [2024-12-05 20:05:04.854332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:03.594 [2024-12-05 20:05:04.854431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:03.594 [2024-12-05 20:05:04.854452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 [2024-12-05 20:05:04.998021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:03.594 [2024-12-05 20:05:04.999965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:03.594 [2024-12-05 20:05:05.000027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:03.594 [2024-12-05 20:05:05.000096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:03.594 [2024-12-05 20:05:05.000161] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:03.594 [2024-12-05 20:05:05.000188] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:03.594 [2024-12-05 20:05:05.000230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:03.594 [2024-12-05 20:05:05.000245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:03.594 request:
00:12:03.594 {
00:12:03.594 "name": "raid_bdev1",
00:12:03.594 "raid_level": "raid1",
00:12:03.594 "base_bdevs": [
00:12:03.594 "malloc1",
00:12:03.594 "malloc2",
00:12:03.594 "malloc3"
00:12:03.594 ],
00:12:03.594 "superblock": false,
00:12:03.594 "method": "bdev_raid_create",
00:12:03.594 "req_id": 1
00:12:03.594 }
00:12:03.594 Got JSON-RPC error response
00:12:03.594 response:
00:12:03.594 {
00:12:03.594 "code": -17,
00:12:03.594 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:03.594 }
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:03.594 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.859 [2024-12-05 20:05:05.061860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:03.859 [2024-12-05 20:05:05.061931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:03.859 [2024-12-05 20:05:05.061951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:03.859 [2024-12-05 20:05:05.061962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:03.859 [2024-12-05 20:05:05.064365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:03.859 [2024-12-05 20:05:05.064406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:03.859 [2024-12-05 20:05:05.064499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:03.859 [2024-12-05 20:05:05.064564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:03.859 pt1
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:03.859 "name": "raid_bdev1",
00:12:03.859 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:03.859 "strip_size_kb": 0,
00:12:03.859 "state": "configuring",
00:12:03.859 "raid_level": "raid1",
00:12:03.859 "superblock": true,
00:12:03.859 "num_base_bdevs": 3,
00:12:03.859 "num_base_bdevs_discovered": 1,
00:12:03.859 "num_base_bdevs_operational": 3,
00:12:03.859 "base_bdevs_list": [
00:12:03.859 {
00:12:03.859 "name": "pt1",
00:12:03.859 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:03.859 "is_configured": true,
00:12:03.859 "data_offset": 2048,
00:12:03.859 "data_size": 63488
00:12:03.859 },
00:12:03.859 {
00:12:03.859 "name": null,
00:12:03.859 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:03.859 "is_configured": false,
00:12:03.859 "data_offset": 2048,
00:12:03.859 "data_size": 63488
00:12:03.859 },
00:12:03.859 {
00:12:03.859 "name": null,
00:12:03.859 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:03.859 "is_configured": false,
00:12:03.859 "data_offset": 2048,
00:12:03.859 "data_size": 63488
00:12:03.859 }
00:12:03.859 ]
00:12:03.859 }'
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:03.859 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.119 [2024-12-05 20:05:05.533104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:04.119 [2024-12-05 20:05:05.533175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:04.119 [2024-12-05 20:05:05.533215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:04.119 [2024-12-05 20:05:05.533229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:04.119 [2024-12-05 20:05:05.533716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:04.119 [2024-12-05 20:05:05.533751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:04.119 [2024-12-05 20:05:05.533861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:04.119 [2024-12-05 20:05:05.533906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:04.119 pt2
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.119 [2024-12-05 20:05:05.541082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:04.119 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.379 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.379 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.379 "name": "raid_bdev1",
00:12:04.379 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:04.379 "strip_size_kb": 0,
00:12:04.379 "state": "configuring",
00:12:04.379 "raid_level": "raid1",
00:12:04.379 "superblock": true,
00:12:04.379 "num_base_bdevs": 3,
00:12:04.379 "num_base_bdevs_discovered": 1,
00:12:04.379 "num_base_bdevs_operational": 3,
00:12:04.379 "base_bdevs_list": [
00:12:04.379 {
00:12:04.379 "name": "pt1",
00:12:04.379 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:04.379 "is_configured": true,
00:12:04.379 "data_offset": 2048,
00:12:04.379 "data_size": 63488
00:12:04.379 },
00:12:04.379 {
00:12:04.379 "name": null,
00:12:04.379 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:04.379 "is_configured": false,
00:12:04.379 "data_offset": 0,
00:12:04.379 "data_size": 63488
00:12:04.379 },
00:12:04.379 {
00:12:04.379 "name": null,
00:12:04.379 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:04.379 "is_configured": false,
00:12:04.379 "data_offset": 2048,
00:12:04.379 "data_size": 63488
00:12:04.379 }
00:12:04.379 ]
00:12:04.379 }'
00:12:04.379 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.379 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.638 [2024-12-05 20:05:06.016302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:04.638 [2024-12-05 20:05:06.016386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:04.638 [2024-12-05 20:05:06.016427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:12:04.638 [2024-12-05 20:05:06.016442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:04.638 [2024-12-05 20:05:06.016996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:04.638 [2024-12-05 20:05:06.017038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:04.638 [2024-12-05 20:05:06.017159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:04.638 [2024-12-05 20:05:06.017214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:04.638 pt2
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.638 [2024-12-05 20:05:06.028307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:04.638 [2024-12-05 20:05:06.028370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:04.638 [2024-12-05 20:05:06.028389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:04.638 [2024-12-05 20:05:06.028401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:04.638 [2024-12-05 20:05:06.028914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:04.638 [2024-12-05 20:05:06.028942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:04.638 [2024-12-05 20:05:06.029032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:04.638 [2024-12-05 20:05:06.029059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:04.638 [2024-12-05 20:05:06.029210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:04.638 [2024-12-05 20:05:06.029225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:04.638 [2024-12-05 20:05:06.029526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:04.638 [2024-12-05 20:05:06.029734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:04.638 [2024-12-05 20:05:06.029751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:04.638 [2024-12-05 20:05:06.029968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:04.638 pt3
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.638 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.899 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.899 "name": "raid_bdev1",
00:12:04.899 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:04.899 "strip_size_kb": 0,
00:12:04.899 "state": "online",
00:12:04.899 "raid_level": "raid1",
00:12:04.899 "superblock": true,
00:12:04.899 "num_base_bdevs": 3,
00:12:04.899 "num_base_bdevs_discovered": 3,
00:12:04.899 "num_base_bdevs_operational": 3,
00:12:04.899 "base_bdevs_list": [
00:12:04.899 {
00:12:04.899 "name": "pt1",
00:12:04.899 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:04.899 "is_configured": true,
00:12:04.899 "data_offset": 2048,
00:12:04.899 "data_size": 63488
00:12:04.899 },
00:12:04.899 {
00:12:04.899 "name": "pt2",
00:12:04.899 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:04.899 "is_configured": true,
00:12:04.899 "data_offset": 2048,
00:12:04.899 "data_size": 63488
00:12:04.899 },
00:12:04.899 {
00:12:04.899 "name": "pt3",
00:12:04.899 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:04.899 "is_configured": true,
00:12:04.899 "data_offset": 2048,
00:12:04.899 "data_size": 63488
00:12:04.899 }
00:12:04.899 ]
00:12:04.899 }'
00:12:04.899 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.899 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.159 [2024-12-05 20:05:06.499909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.159 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:05.159 "name": "raid_bdev1",
00:12:05.159 "aliases": [
00:12:05.159 "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb"
00:12:05.159 ],
00:12:05.159 "product_name": "Raid Volume",
00:12:05.159 "block_size": 512,
00:12:05.159 "num_blocks": 63488,
00:12:05.159 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:05.159 "assigned_rate_limits": {
00:12:05.159 "rw_ios_per_sec": 0,
00:12:05.159 "rw_mbytes_per_sec": 0,
00:12:05.159 "r_mbytes_per_sec": 0,
00:12:05.159 "w_mbytes_per_sec": 0
00:12:05.159 },
00:12:05.159 "claimed": false,
00:12:05.159 "zoned": false,
00:12:05.159 "supported_io_types": {
00:12:05.159 "read": true,
00:12:05.159 "write": true,
00:12:05.159 "unmap": false,
00:12:05.159 "flush": false,
00:12:05.160 "reset": true,
00:12:05.160 "nvme_admin": false,
00:12:05.160 "nvme_io": false,
00:12:05.160 "nvme_io_md": false,
00:12:05.160 "write_zeroes": true,
00:12:05.160 "zcopy": false,
00:12:05.160 "get_zone_info": false,
00:12:05.160 "zone_management": false,
00:12:05.160 "zone_append": false,
00:12:05.160 "compare": false,
00:12:05.160 "compare_and_write": false,
00:12:05.160 "abort": false,
00:12:05.160 "seek_hole": false,
00:12:05.160 "seek_data": false,
00:12:05.160 "copy": false,
00:12:05.160 "nvme_iov_md": false
00:12:05.160 },
00:12:05.160 "memory_domains": [
00:12:05.160 {
00:12:05.160 "dma_device_id": "system",
00:12:05.160 "dma_device_type": 1
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.160 "dma_device_type": 2
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "dma_device_id": "system",
00:12:05.160 "dma_device_type": 1
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.160 "dma_device_type": 2
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "dma_device_id": "system",
00:12:05.160 "dma_device_type": 1
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.160 "dma_device_type": 2
00:12:05.160 }
00:12:05.160 ],
00:12:05.160 "driver_specific": {
00:12:05.160 "raid": {
00:12:05.160 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:05.160 "strip_size_kb": 0,
00:12:05.160 "state": "online",
00:12:05.160 "raid_level": "raid1",
00:12:05.160 "superblock": true,
00:12:05.160 "num_base_bdevs": 3,
00:12:05.160 "num_base_bdevs_discovered": 3,
00:12:05.160 "num_base_bdevs_operational": 3,
00:12:05.160 "base_bdevs_list": [
00:12:05.160 {
00:12:05.160 "name": "pt1",
00:12:05.160 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:05.160 "is_configured": true,
00:12:05.160 "data_offset": 2048,
00:12:05.160 "data_size": 63488
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "name": "pt2",
00:12:05.160 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:05.160 "is_configured": true,
00:12:05.160 "data_offset": 2048,
00:12:05.160 "data_size": 63488
00:12:05.160 },
00:12:05.160 {
00:12:05.160 "name": "pt3",
00:12:05.160 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:05.160 "is_configured": true,
00:12:05.160 "data_offset": 2048,
00:12:05.160 "data_size": 63488
00:12:05.160 }
00:12:05.160 ]
00:12:05.160 }
00:12:05.160 }
00:12:05.160 }'
00:12:05.160 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:05.160 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:05.160 pt2
00:12:05.160 pt3'
00:12:05.160 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 [2024-12-05 20:05:06.763449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64f80e73-c9a5-4cf3-bfd7-58a41e2572fb '!=' 64f80e73-c9a5-4cf3-bfd7-58a41e2572fb ']'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 [2024-12-05 20:05:06.811070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.420 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.679 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.680 "name": "raid_bdev1",
00:12:05.680 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb",
00:12:05.680 "strip_size_kb": 0,
00:12:05.680 "state": "online",
00:12:05.680 "raid_level": "raid1",
00:12:05.680 "superblock": true,
00:12:05.680 "num_base_bdevs": 3,
00:12:05.680 "num_base_bdevs_discovered": 2,
00:12:05.680 "num_base_bdevs_operational": 2,
00:12:05.680 "base_bdevs_list": [
00:12:05.680 {
00:12:05.680 "name": null,
00:12:05.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.680 "is_configured": false,
00:12:05.680 "data_offset": 0,
00:12:05.680 "data_size": 63488
00:12:05.680 },
00:12:05.680 {
00:12:05.680 "name": "pt2",
00:12:05.680 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:05.680 "is_configured": true,
00:12:05.680 "data_offset": 2048,
00:12:05.680 "data_size": 63488
00:12:05.680 },
00:12:05.680 {
00:12:05.680 "name": "pt3",
00:12:05.680 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:05.680 "is_configured": true,
00:12:05.680 "data_offset": 2048,
00:12:05.680 "data_size": 63488
00:12:05.680 }
00:12:05.680 ]
00:12:05.680 }'
00:12:05.680 20:05:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.680 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.939 [2024-12-05 20:05:07.254427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.939 [2024-12-05 20:05:07.254459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.939 [2024-12-05 20:05:07.254541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.939 [2024-12-05 20:05:07.254608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.939 [2024-12-05 20:05:07.254642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:05.939 
20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.939 20:05:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.939 [2024-12-05 20:05:07.338240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.939 [2024-12-05 20:05:07.338297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.939 [2024-12-05 20:05:07.338313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:05.939 [2024-12-05 20:05:07.338325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.939 [2024-12-05 20:05:07.340704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.939 [2024-12-05 20:05:07.340750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.939 [2024-12-05 20:05:07.340834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.939 [2024-12-05 20:05:07.340908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.939 pt2 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.940 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.200 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.200 "name": "raid_bdev1", 00:12:06.200 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:06.200 "strip_size_kb": 0, 00:12:06.200 "state": "configuring", 00:12:06.200 "raid_level": "raid1", 00:12:06.200 "superblock": true, 00:12:06.200 "num_base_bdevs": 3, 00:12:06.200 "num_base_bdevs_discovered": 1, 00:12:06.200 "num_base_bdevs_operational": 2, 00:12:06.200 "base_bdevs_list": [ 00:12:06.200 { 00:12:06.200 "name": null, 00:12:06.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.200 "is_configured": false, 00:12:06.200 "data_offset": 2048, 00:12:06.200 "data_size": 63488 00:12:06.200 }, 00:12:06.200 { 00:12:06.200 "name": "pt2", 00:12:06.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.200 "is_configured": true, 00:12:06.200 "data_offset": 2048, 00:12:06.200 "data_size": 63488 00:12:06.200 }, 00:12:06.200 { 00:12:06.200 "name": null, 00:12:06.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.200 "is_configured": false, 00:12:06.200 "data_offset": 2048, 00:12:06.200 "data_size": 63488 00:12:06.200 } 00:12:06.200 ] 00:12:06.200 }' 
00:12:06.200 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.200 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.461 [2024-12-05 20:05:07.785543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:06.461 [2024-12-05 20:05:07.785626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.461 [2024-12-05 20:05:07.785648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:06.461 [2024-12-05 20:05:07.785662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.461 [2024-12-05 20:05:07.786243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.461 [2024-12-05 20:05:07.786285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:06.461 [2024-12-05 20:05:07.786407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:06.461 [2024-12-05 20:05:07.786448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:06.461 [2024-12-05 20:05:07.786599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:06.461 [2024-12-05 20:05:07.786619] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.461 [2024-12-05 20:05:07.786945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:06.461 [2024-12-05 20:05:07.787159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:06.461 [2024-12-05 20:05:07.787181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:06.461 [2024-12-05 20:05:07.787359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.461 pt3 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.461 20:05:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.462 "name": "raid_bdev1", 00:12:06.462 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:06.462 "strip_size_kb": 0, 00:12:06.462 "state": "online", 00:12:06.462 "raid_level": "raid1", 00:12:06.462 "superblock": true, 00:12:06.462 "num_base_bdevs": 3, 00:12:06.462 "num_base_bdevs_discovered": 2, 00:12:06.462 "num_base_bdevs_operational": 2, 00:12:06.462 "base_bdevs_list": [ 00:12:06.462 { 00:12:06.462 "name": null, 00:12:06.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.462 "is_configured": false, 00:12:06.462 "data_offset": 2048, 00:12:06.462 "data_size": 63488 00:12:06.462 }, 00:12:06.462 { 00:12:06.462 "name": "pt2", 00:12:06.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.462 "is_configured": true, 00:12:06.462 "data_offset": 2048, 00:12:06.462 "data_size": 63488 00:12:06.462 }, 00:12:06.462 { 00:12:06.462 "name": "pt3", 00:12:06.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.462 "is_configured": true, 00:12:06.462 "data_offset": 2048, 00:12:06.462 "data_size": 63488 00:12:06.462 } 00:12:06.462 ] 00:12:06.462 }' 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.462 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.031 
20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 [2024-12-05 20:05:08.260720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.031 [2024-12-05 20:05:08.260757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.031 [2024-12-05 20:05:08.260853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.031 [2024-12-05 20:05:08.260953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.031 [2024-12-05 20:05:08.260969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.031 20:05:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 [2024-12-05 20:05:08.320620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.031 [2024-12-05 20:05:08.320684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.031 [2024-12-05 20:05:08.320704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:07.031 [2024-12-05 20:05:08.320715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.031 [2024-12-05 20:05:08.323189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.031 [2024-12-05 20:05:08.323227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.031 [2024-12-05 20:05:08.323317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:07.031 [2024-12-05 20:05:08.323375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.031 [2024-12-05 20:05:08.323519] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:07.031 [2024-12-05 20:05:08.323541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.031 [2024-12-05 20:05:08.323563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:07.031 [2024-12-05 
20:05:08.323639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.031 pt1 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.031 20:05:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.031 "name": "raid_bdev1", 00:12:07.031 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:07.031 "strip_size_kb": 0, 00:12:07.031 "state": "configuring", 00:12:07.031 "raid_level": "raid1", 00:12:07.031 "superblock": true, 00:12:07.031 "num_base_bdevs": 3, 00:12:07.031 "num_base_bdevs_discovered": 1, 00:12:07.031 "num_base_bdevs_operational": 2, 00:12:07.031 "base_bdevs_list": [ 00:12:07.031 { 00:12:07.031 "name": null, 00:12:07.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.031 "is_configured": false, 00:12:07.031 "data_offset": 2048, 00:12:07.031 "data_size": 63488 00:12:07.031 }, 00:12:07.031 { 00:12:07.031 "name": "pt2", 00:12:07.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.031 "is_configured": true, 00:12:07.031 "data_offset": 2048, 00:12:07.031 "data_size": 63488 00:12:07.031 }, 00:12:07.031 { 00:12:07.031 "name": null, 00:12:07.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.031 "is_configured": false, 00:12:07.031 "data_offset": 2048, 00:12:07.031 "data_size": 63488 00:12:07.031 } 00:12:07.031 ] 00:12:07.031 }' 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.031 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.601 [2024-12-05 20:05:08.811818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.601 [2024-12-05 20:05:08.811896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.601 [2024-12-05 20:05:08.811920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:07.601 [2024-12-05 20:05:08.811929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.601 [2024-12-05 20:05:08.812482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.601 [2024-12-05 20:05:08.812513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.601 [2024-12-05 20:05:08.812614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:07.601 [2024-12-05 20:05:08.812649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.601 [2024-12-05 20:05:08.812794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:07.601 [2024-12-05 20:05:08.812813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.601 [2024-12-05 20:05:08.813111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:07.601 [2024-12-05 20:05:08.813308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:07.601 [2024-12-05 20:05:08.813335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:12:07.601 [2024-12-05 20:05:08.813493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.601 pt3 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.601 20:05:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.601 "name": "raid_bdev1", 00:12:07.601 "uuid": "64f80e73-c9a5-4cf3-bfd7-58a41e2572fb", 00:12:07.601 "strip_size_kb": 0, 00:12:07.601 "state": "online", 00:12:07.601 "raid_level": "raid1", 00:12:07.601 "superblock": true, 00:12:07.601 "num_base_bdevs": 3, 00:12:07.601 "num_base_bdevs_discovered": 2, 00:12:07.601 "num_base_bdevs_operational": 2, 00:12:07.601 "base_bdevs_list": [ 00:12:07.601 { 00:12:07.601 "name": null, 00:12:07.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.601 "is_configured": false, 00:12:07.601 "data_offset": 2048, 00:12:07.601 "data_size": 63488 00:12:07.601 }, 00:12:07.601 { 00:12:07.601 "name": "pt2", 00:12:07.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.601 "is_configured": true, 00:12:07.602 "data_offset": 2048, 00:12:07.602 "data_size": 63488 00:12:07.602 }, 00:12:07.602 { 00:12:07.602 "name": "pt3", 00:12:07.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.602 "is_configured": true, 00:12:07.602 "data_offset": 2048, 00:12:07.602 "data_size": 63488 00:12:07.602 } 00:12:07.602 ] 00:12:07.602 }' 00:12:07.602 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.602 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:07.861 
20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:07.861 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 [2024-12-05 20:05:09.295322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 64f80e73-c9a5-4cf3-bfd7-58a41e2572fb '!=' 64f80e73-c9a5-4cf3-bfd7-58a41e2572fb ']' 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68773 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68773 ']' 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68773 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68773 00:12:08.120 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.121 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.121 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68773' 00:12:08.121 killing process with pid 68773 00:12:08.121 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68773 00:12:08.121 [2024-12-05 
20:05:09.367011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.121 [2024-12-05 20:05:09.367122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.121 20:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68773 00:12:08.121 [2024-12-05 20:05:09.367213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.121 [2024-12-05 20:05:09.367240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:08.380 [2024-12-05 20:05:09.684873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.759 20:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:09.759 00:12:09.759 real 0m7.872s 00:12:09.759 user 0m12.371s 00:12:09.759 sys 0m1.317s 00:12:09.759 20:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.759 20:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 ************************************ 00:12:09.759 END TEST raid_superblock_test 00:12:09.759 ************************************ 00:12:09.759 20:05:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:09.759 20:05:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.759 20:05:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.759 20:05:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 ************************************ 00:12:09.759 START TEST raid_read_error_test 00:12:09.759 ************************************ 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:09.759 20:05:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QKR8SO6w9j 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69213 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69213 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69213 ']' 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.759 20:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 [2024-12-05 20:05:11.022909] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:09.759 [2024-12-05 20:05:11.023035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69213 ] 00:12:09.759 [2024-12-05 20:05:11.178149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.018 [2024-12-05 20:05:11.293499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.276 [2024-12-05 20:05:11.495198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.276 [2024-12-05 20:05:11.495234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.534 BaseBdev1_malloc 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.534 true 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.534 [2024-12-05 20:05:11.936116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.534 [2024-12-05 20:05:11.936181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.534 [2024-12-05 20:05:11.936201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.534 [2024-12-05 20:05:11.936212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.534 [2024-12-05 20:05:11.938302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.534 [2024-12-05 20:05:11.938341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.534 BaseBdev1 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.534 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 BaseBdev2_malloc 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 true 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 [2024-12-05 20:05:12.001817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:10.793 [2024-12-05 20:05:12.001879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.793 [2024-12-05 20:05:12.001928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:10.793 [2024-12-05 20:05:12.001942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.793 [2024-12-05 20:05:12.004209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.793 [2024-12-05 20:05:12.004252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.793 BaseBdev2 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 BaseBdev3_malloc 00:12:10.793 20:05:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 true 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 [2024-12-05 20:05:12.080631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:10.793 [2024-12-05 20:05:12.080689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.793 [2024-12-05 20:05:12.080709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:10.793 [2024-12-05 20:05:12.080720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.793 [2024-12-05 20:05:12.082890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.793 [2024-12-05 20:05:12.082942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:10.793 BaseBdev3 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 [2024-12-05 20:05:12.092670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.793 [2024-12-05 20:05:12.094501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.793 [2024-12-05 20:05:12.094583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.793 [2024-12-05 20:05:12.094799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.793 [2024-12-05 20:05:12.094822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.793 [2024-12-05 20:05:12.095094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:10.793 [2024-12-05 20:05:12.095278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.793 [2024-12-05 20:05:12.095293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:10.793 [2024-12-05 20:05:12.095449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.793 20:05:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.793 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.794 "name": "raid_bdev1", 00:12:10.794 "uuid": "eece5e8e-796d-4466-bcda-30a32b933872", 00:12:10.794 "strip_size_kb": 0, 00:12:10.794 "state": "online", 00:12:10.794 "raid_level": "raid1", 00:12:10.794 "superblock": true, 00:12:10.794 "num_base_bdevs": 3, 00:12:10.794 "num_base_bdevs_discovered": 3, 00:12:10.794 "num_base_bdevs_operational": 3, 00:12:10.794 "base_bdevs_list": [ 00:12:10.794 { 00:12:10.794 "name": "BaseBdev1", 00:12:10.794 "uuid": "07970f31-8c55-5a65-92b6-3e5a526d2a20", 00:12:10.794 "is_configured": true, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 00:12:10.794 }, 00:12:10.794 { 00:12:10.794 "name": "BaseBdev2", 00:12:10.794 "uuid": "8c55e349-793a-57e1-b5cb-df8acf7517fd", 00:12:10.794 "is_configured": true, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 
00:12:10.794 }, 00:12:10.794 { 00:12:10.794 "name": "BaseBdev3", 00:12:10.794 "uuid": "70750ebf-f611-52df-bd84-fec3b8604f75", 00:12:10.794 "is_configured": true, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 00:12:10.794 } 00:12:10.794 ] 00:12:10.794 }' 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.794 20:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.361 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:11.361 20:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.361 [2024-12-05 20:05:12.633147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.297 
20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.297 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.297 "name": "raid_bdev1", 00:12:12.297 "uuid": "eece5e8e-796d-4466-bcda-30a32b933872", 00:12:12.297 "strip_size_kb": 0, 00:12:12.297 "state": "online", 00:12:12.297 "raid_level": "raid1", 00:12:12.297 "superblock": true, 00:12:12.297 "num_base_bdevs": 3, 00:12:12.297 "num_base_bdevs_discovered": 3, 00:12:12.297 "num_base_bdevs_operational": 3, 00:12:12.298 "base_bdevs_list": [ 00:12:12.298 { 00:12:12.298 "name": "BaseBdev1", 00:12:12.298 "uuid": "07970f31-8c55-5a65-92b6-3e5a526d2a20", 
00:12:12.298 "is_configured": true, 00:12:12.298 "data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 }, 00:12:12.298 { 00:12:12.298 "name": "BaseBdev2", 00:12:12.298 "uuid": "8c55e349-793a-57e1-b5cb-df8acf7517fd", 00:12:12.298 "is_configured": true, 00:12:12.298 "data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 }, 00:12:12.298 { 00:12:12.298 "name": "BaseBdev3", 00:12:12.298 "uuid": "70750ebf-f611-52df-bd84-fec3b8604f75", 00:12:12.298 "is_configured": true, 00:12:12.298 "data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 } 00:12:12.298 ] 00:12:12.298 }' 00:12:12.298 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.298 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.868 [2024-12-05 20:05:14.040780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.868 [2024-12-05 20:05:14.040835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.868 [2024-12-05 20:05:14.043773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.868 [2024-12-05 20:05:14.043824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.868 [2024-12-05 20:05:14.043936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.868 [2024-12-05 20:05:14.043948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:12.868 { 00:12:12.868 "results": [ 00:12:12.868 { 00:12:12.868 "job": "raid_bdev1", 
00:12:12.868 "core_mask": "0x1", 00:12:12.868 "workload": "randrw", 00:12:12.868 "percentage": 50, 00:12:12.868 "status": "finished", 00:12:12.868 "queue_depth": 1, 00:12:12.868 "io_size": 131072, 00:12:12.868 "runtime": 1.408729, 00:12:12.868 "iops": 13039.413542278182, 00:12:12.868 "mibps": 1629.9266927847727, 00:12:12.868 "io_failed": 0, 00:12:12.868 "io_timeout": 0, 00:12:12.868 "avg_latency_us": 73.96976898377059, 00:12:12.868 "min_latency_us": 24.034934497816593, 00:12:12.868 "max_latency_us": 1445.2262008733624 00:12:12.868 } 00:12:12.868 ], 00:12:12.868 "core_count": 1 00:12:12.868 } 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69213 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69213 ']' 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69213 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69213 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.868 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.868 killing process with pid 69213 00:12:12.869 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69213' 00:12:12.869 20:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69213 00:12:12.869 [2024-12-05 20:05:14.087254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.869 20:05:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69213 00:12:13.126 [2024-12-05 20:05:14.325239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QKR8SO6w9j 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:14.499 00:12:14.499 real 0m4.653s 00:12:14.499 user 0m5.557s 00:12:14.499 sys 0m0.572s 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.499 20:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.499 ************************************ 00:12:14.499 END TEST raid_read_error_test 00:12:14.499 ************************************ 00:12:14.499 20:05:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:14.499 20:05:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:14.499 20:05:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.499 20:05:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.499 ************************************ 00:12:14.499 START TEST raid_write_error_test 00:12:14.499 ************************************ 00:12:14.499 20:05:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dXwDt2Uqd5
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69361
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69361
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69361 ']'
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:14.499 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.499 [2024-12-05 20:05:15.738108] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:12:14.499 [2024-12-05 20:05:15.738236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69361 ]
00:12:14.499 [2024-12-05 20:05:15.911487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.757 [2024-12-05 20:05:16.036817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.016 [2024-12-05 20:05:16.244986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:15.016 [2024-12-05 20:05:16.245056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.275 BaseBdev1_malloc
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.275 true
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.275 [2024-12-05 20:05:16.681787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:15.275 [2024-12-05 20:05:16.681844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.275 [2024-12-05 20:05:16.681864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:15.275 [2024-12-05 20:05:16.681876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.275 [2024-12-05 20:05:16.684107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.275 [2024-12-05 20:05:16.684149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:15.275 BaseBdev1
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.275 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.533 BaseBdev2_malloc
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.533 true
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.533 [2024-12-05 20:05:16.750174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:15.533 [2024-12-05 20:05:16.750226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.533 [2024-12-05 20:05:16.750241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:15.533 [2024-12-05 20:05:16.750252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.533 [2024-12-05 20:05:16.752350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.533 [2024-12-05 20:05:16.752389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:15.533 BaseBdev2
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.533 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.533 BaseBdev3_malloc
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.534 true
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.534 [2024-12-05 20:05:16.834497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:15.534 [2024-12-05 20:05:16.834551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.534 [2024-12-05 20:05:16.834569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:15.534 [2024-12-05 20:05:16.834581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.534 [2024-12-05 20:05:16.836786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.534 [2024-12-05 20:05:16.836828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:15.534 BaseBdev3
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.534 [2024-12-05 20:05:16.846557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:15.534 [2024-12-05 20:05:16.848476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:15.534 [2024-12-05 20:05:16.848561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:15.534 [2024-12-05 20:05:16.848785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:12:15.534 [2024-12-05 20:05:16.848798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:15.534 [2024-12-05 20:05:16.849069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:12:15.534 [2024-12-05 20:05:16.849262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:12:15.534 [2024-12-05 20:05:16.849281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:12:15.534 [2024-12-05 20:05:16.849447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.534 "name": "raid_bdev1",
00:12:15.534 "uuid": "417a100c-2c8d-417e-9ee5-90b12f07064a",
00:12:15.534 "strip_size_kb": 0,
00:12:15.534 "state": "online",
00:12:15.534 "raid_level": "raid1",
00:12:15.534 "superblock": true,
00:12:15.534 "num_base_bdevs": 3,
00:12:15.534 "num_base_bdevs_discovered": 3,
00:12:15.534 "num_base_bdevs_operational": 3,
00:12:15.534 "base_bdevs_list": [
00:12:15.534 {
00:12:15.534 "name": "BaseBdev1",
00:12:15.534 "uuid": "973d0262-fff8-5b8e-a9b4-193bd9c505b1",
00:12:15.534 "is_configured": true,
00:12:15.534 "data_offset": 2048,
00:12:15.534 "data_size": 63488
00:12:15.534 },
00:12:15.534 {
00:12:15.534 "name": "BaseBdev2",
00:12:15.534 "uuid": "132d2bd0-aed6-5aed-a637-f3e9374e3175",
00:12:15.534 "is_configured": true,
00:12:15.534 "data_offset": 2048,
00:12:15.534 "data_size": 63488
00:12:15.534 },
00:12:15.534 {
00:12:15.534 "name": "BaseBdev3",
00:12:15.534 "uuid": "3b25dc35-a7fa-55b6-83a6-a10d224c342c",
00:12:15.534 "is_configured": true,
00:12:15.534 "data_offset": 2048,
00:12:15.534 "data_size": 63488
00:12:15.534 }
00:12:15.534 ]
00:12:15.534 }'
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.534 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:16.103 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:16.103 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:16.103 [2024-12-05 20:05:17.390921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:12:17.166 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:12:17.166 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.166 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.166 [2024-12-05 20:05:18.306897] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:12:17.166 [2024-12-05 20:05:18.307059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:17.166 [2024-12-05 20:05:18.307326] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700
00:12:17.166 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.166 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:17.167 "name": "raid_bdev1",
00:12:17.167 "uuid": "417a100c-2c8d-417e-9ee5-90b12f07064a",
00:12:17.167 "strip_size_kb": 0,
00:12:17.167 "state": "online",
00:12:17.167 "raid_level": "raid1",
00:12:17.167 "superblock": true,
00:12:17.167 "num_base_bdevs": 3,
00:12:17.167 "num_base_bdevs_discovered": 2,
00:12:17.167 "num_base_bdevs_operational": 2,
00:12:17.167 "base_bdevs_list": [
00:12:17.167 {
00:12:17.167 "name": null,
00:12:17.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:17.167 "is_configured": false,
00:12:17.167 "data_offset": 0,
00:12:17.167 "data_size": 63488
00:12:17.167 },
00:12:17.167 {
00:12:17.167 "name": "BaseBdev2",
00:12:17.167 "uuid": "132d2bd0-aed6-5aed-a637-f3e9374e3175",
00:12:17.167 "is_configured": true,
00:12:17.167 "data_offset": 2048,
00:12:17.167 "data_size": 63488
00:12:17.167 },
00:12:17.167 {
00:12:17.167 "name": "BaseBdev3",
00:12:17.167 "uuid": "3b25dc35-a7fa-55b6-83a6-a10d224c342c",
00:12:17.167 "is_configured": true,
00:12:17.167 "data_offset": 2048,
00:12:17.167 "data_size": 63488
00:12:17.167 }
00:12:17.167 ]
00:12:17.167 }'
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:17.167 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.429 [2024-12-05 20:05:18.717492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:17.429 [2024-12-05 20:05:18.717605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:17.429 [2024-12-05 20:05:18.720689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:17.429 [2024-12-05 20:05:18.720810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:17.429 [2024-12-05 20:05:18.720951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:17.429 [2024-12-05 20:05:18.721016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:12:17.429 {
00:12:17.429 "results": [
00:12:17.429 {
00:12:17.429 "job": "raid_bdev1",
00:12:17.429 "core_mask": "0x1",
00:12:17.429 "workload": "randrw",
00:12:17.429 "percentage": 50,
00:12:17.429 "status": "finished",
00:12:17.429 "queue_depth": 1,
00:12:17.429 "io_size": 131072,
00:12:17.429 "runtime": 1.327353,
00:12:17.429 "iops": 13612.806841887576,
00:12:17.429 "mibps": 1701.600855235947,
00:12:17.429 "io_failed": 0,
00:12:17.429 "io_timeout": 0,
00:12:17.429 "avg_latency_us": 70.5735399068249,
00:12:17.429 "min_latency_us": 25.041048034934498,
00:12:17.429 "max_latency_us": 1473.844541484716
00:12:17.429 }
00:12:17.429 ],
00:12:17.429 "core_count": 1
00:12:17.429 }
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69361
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69361 ']'
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69361
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69361 killing process with pid 69361
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69361'
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69361
00:12:17.429 [2024-12-05 20:05:18.757716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:17.429 20:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69361
00:12:17.689 [2024-12-05 20:05:18.992346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dXwDt2Uqd5
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:12:19.071
00:12:19.071 real 0m4.567s
00:12:19.071 user 0m5.431s
00:12:19.071 sys 0m0.560s
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:19.071 20:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.071 ************************************
00:12:19.071 END TEST raid_write_error_test
00:12:19.071 ************************************
00:12:19.071 20:05:20 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:12:19.071 20:05:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:12:19.071 20:05:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:12:19.071 20:05:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:19.071 20:05:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:19.071 20:05:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:19.072 ************************************
00:12:19.072 START TEST raid_state_function_test
00:12:19.072 ************************************
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= Process raid pid: 69509
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69509
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69509'
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69509
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69509 ']'
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:19.072 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.072 [2024-12-05 20:05:20.361919] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:12:19.072 [2024-12-05 20:05:20.362117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:19.332 [2024-12-05 20:05:20.539006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.332 [2024-12-05 20:05:20.654068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.590 [2024-12-05 20:05:20.863476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:19.590 [2024-12-05 20:05:20.863604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.849 [2024-12-05 20:05:21.223568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:19.849 [2024-12-05 20:05:21.223630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:19.849 [2024-12-05 20:05:21.223641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:19.849 [2024-12-05 20:05:21.223651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:19.849 [2024-12-05 20:05:21.223657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:19.849 [2024-12-05 20:05:21.223666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:19.849 [2024-12-05 20:05:21.223672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:19.849 [2024-12-05 20:05:21.223681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.849 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:19.849 "name": "Existed_Raid",
00:12:19.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.849 "strip_size_kb": 64,
00:12:19.849 "state": "configuring",
00:12:19.849 "raid_level": "raid0",
00:12:19.849 "superblock": false,
00:12:19.849 "num_base_bdevs": 4,
00:12:19.849 "num_base_bdevs_discovered": 0,
00:12:19.849 "num_base_bdevs_operational": 4,
00:12:19.849 "base_bdevs_list": [
00:12:19.849 {
00:12:19.849 "name": "BaseBdev1",
00:12:19.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.849 "is_configured": false,
00:12:19.849 "data_offset": 0,
00:12:19.849 "data_size": 0
00:12:19.849 },
00:12:19.849 {
00:12:19.849 "name": "BaseBdev2",
00:12:19.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.849 "is_configured": false,
00:12:19.849 "data_offset": 0,
00:12:19.849 "data_size": 0
00:12:19.849 },
00:12:19.849 {
00:12:19.849 "name": "BaseBdev3",
00:12:19.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.849 "is_configured": false,
00:12:19.849 "data_offset": 0,
00:12:19.849 "data_size": 0
00:12:19.849 },
00:12:19.849 {
00:12:19.849 "name": "BaseBdev4",
00:12:19.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.849 "is_configured": false,
00:12:19.849 "data_offset": 0,
00:12:19.850 "data_size": 0
00:12:19.850 }
00:12:19.850 ]
00:12:19.850 }'
00:12:19.850 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:19.850 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 [2024-12-05 20:05:21.654765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:20.418 [2024-12-05 20:05:21.654854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 [2024-12-05 20:05:21.666742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:20.418 [2024-12-05 20:05:21.666821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:20.418 [2024-12-05 20:05:21.666851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:20.418 [2024-12-05 20:05:21.666875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:20.418 [2024-12-05 20:05:21.666910] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:20.418 [2024-12-05 20:05:21.666934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:20.418 [2024-12-05 20:05:21.666953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:20.418 [2024-12-05 20:05:21.666976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 [2024-12-05 20:05:21.713818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:20.418 BaseBdev1
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.418 [
00:12:20.418 {
00:12:20.418 "name": "BaseBdev1",
00:12:20.418 "aliases": [
00:12:20.418 "d385a25e-3f12-46b5-98d1-7837f19e1fb2"
00:12:20.418 ],
00:12:20.418 "product_name": "Malloc disk",
00:12:20.418 "block_size": 512,
00:12:20.418 "num_blocks": 65536,
00:12:20.418 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2",
00:12:20.418 "assigned_rate_limits": {
00:12:20.418 "rw_ios_per_sec": 0,
00:12:20.418 "rw_mbytes_per_sec": 0,
00:12:20.418 "r_mbytes_per_sec": 0,
00:12:20.418 "w_mbytes_per_sec": 0
00:12:20.418 },
00:12:20.418 "claimed": true,
00:12:20.418 "claim_type": "exclusive_write",
00:12:20.418 "zoned": false,
00:12:20.418 "supported_io_types": {
00:12:20.418 "read": true,
00:12:20.418 "write": true,
00:12:20.418 "unmap": true,
00:12:20.418 "flush": true,
00:12:20.418 "reset": true,
00:12:20.418 "nvme_admin": false,
00:12:20.418 "nvme_io": false,
00:12:20.418 "nvme_io_md": false,
00:12:20.418 "write_zeroes": true,
00:12:20.418 "zcopy": true,
00:12:20.418 "get_zone_info": false,
00:12:20.418 "zone_management": false,
00:12:20.418 "zone_append": false,
00:12:20.418 "compare": false,
00:12:20.418 "compare_and_write": false,
00:12:20.418 "abort": true,
00:12:20.418 "seek_hole": false,
00:12:20.418 "seek_data": false,
00:12:20.418 "copy": true,
00:12:20.418 "nvme_iov_md": false
00:12:20.418 },
00:12:20.418 "memory_domains": [
00:12:20.418 {
00:12:20.418 "dma_device_id": "system",
00:12:20.418 "dma_device_type": 1
00:12:20.418 },
00:12:20.418 {
00:12:20.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:20.418 "dma_device_type": 2
00:12:20.418 }
00:12:20.418 ],
00:12:20.418 "driver_specific": {}
00:12:20.418 }
00:12:20.418 ]
00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.418 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.418 "name": "Existed_Raid", 
00:12:20.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.418 "strip_size_kb": 64, 00:12:20.418 "state": "configuring", 00:12:20.418 "raid_level": "raid0", 00:12:20.418 "superblock": false, 00:12:20.418 "num_base_bdevs": 4, 00:12:20.419 "num_base_bdevs_discovered": 1, 00:12:20.419 "num_base_bdevs_operational": 4, 00:12:20.419 "base_bdevs_list": [ 00:12:20.419 { 00:12:20.419 "name": "BaseBdev1", 00:12:20.419 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:20.419 "is_configured": true, 00:12:20.419 "data_offset": 0, 00:12:20.419 "data_size": 65536 00:12:20.419 }, 00:12:20.419 { 00:12:20.419 "name": "BaseBdev2", 00:12:20.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.419 "is_configured": false, 00:12:20.419 "data_offset": 0, 00:12:20.419 "data_size": 0 00:12:20.419 }, 00:12:20.419 { 00:12:20.419 "name": "BaseBdev3", 00:12:20.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.419 "is_configured": false, 00:12:20.419 "data_offset": 0, 00:12:20.419 "data_size": 0 00:12:20.419 }, 00:12:20.419 { 00:12:20.419 "name": "BaseBdev4", 00:12:20.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.419 "is_configured": false, 00:12:20.419 "data_offset": 0, 00:12:20.419 "data_size": 0 00:12:20.419 } 00:12:20.419 ] 00:12:20.419 }' 00:12:20.419 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.419 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.987 [2024-12-05 20:05:22.205049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.987 [2024-12-05 20:05:22.205106] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.987 [2024-12-05 20:05:22.213095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.987 [2024-12-05 20:05:22.215013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.987 [2024-12-05 20:05:22.215148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.987 [2024-12-05 20:05:22.215166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.987 [2024-12-05 20:05:22.215180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.987 [2024-12-05 20:05:22.215188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:20.987 [2024-12-05 20:05:22.215199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.987 "name": "Existed_Raid", 00:12:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.987 "strip_size_kb": 64, 00:12:20.987 "state": "configuring", 00:12:20.987 "raid_level": "raid0", 00:12:20.987 "superblock": false, 00:12:20.987 "num_base_bdevs": 4, 00:12:20.987 
"num_base_bdevs_discovered": 1, 00:12:20.987 "num_base_bdevs_operational": 4, 00:12:20.987 "base_bdevs_list": [ 00:12:20.987 { 00:12:20.987 "name": "BaseBdev1", 00:12:20.987 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:20.987 "is_configured": true, 00:12:20.987 "data_offset": 0, 00:12:20.987 "data_size": 65536 00:12:20.987 }, 00:12:20.987 { 00:12:20.987 "name": "BaseBdev2", 00:12:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.987 "is_configured": false, 00:12:20.987 "data_offset": 0, 00:12:20.987 "data_size": 0 00:12:20.987 }, 00:12:20.987 { 00:12:20.987 "name": "BaseBdev3", 00:12:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.987 "is_configured": false, 00:12:20.987 "data_offset": 0, 00:12:20.987 "data_size": 0 00:12:20.987 }, 00:12:20.987 { 00:12:20.987 "name": "BaseBdev4", 00:12:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.987 "is_configured": false, 00:12:20.987 "data_offset": 0, 00:12:20.987 "data_size": 0 00:12:20.987 } 00:12:20.987 ] 00:12:20.987 }' 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.987 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.247 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.247 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.247 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.505 [2024-12-05 20:05:22.724659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.505 BaseBdev2 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.505 20:05:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.505 [ 00:12:21.505 { 00:12:21.505 "name": "BaseBdev2", 00:12:21.505 "aliases": [ 00:12:21.505 "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc" 00:12:21.505 ], 00:12:21.505 "product_name": "Malloc disk", 00:12:21.505 "block_size": 512, 00:12:21.505 "num_blocks": 65536, 00:12:21.505 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:21.505 "assigned_rate_limits": { 00:12:21.505 "rw_ios_per_sec": 0, 00:12:21.505 "rw_mbytes_per_sec": 0, 00:12:21.505 "r_mbytes_per_sec": 0, 00:12:21.505 "w_mbytes_per_sec": 0 00:12:21.505 }, 00:12:21.505 "claimed": true, 00:12:21.505 "claim_type": "exclusive_write", 00:12:21.505 "zoned": false, 00:12:21.505 "supported_io_types": { 
00:12:21.505 "read": true, 00:12:21.505 "write": true, 00:12:21.505 "unmap": true, 00:12:21.505 "flush": true, 00:12:21.505 "reset": true, 00:12:21.505 "nvme_admin": false, 00:12:21.505 "nvme_io": false, 00:12:21.505 "nvme_io_md": false, 00:12:21.505 "write_zeroes": true, 00:12:21.505 "zcopy": true, 00:12:21.505 "get_zone_info": false, 00:12:21.505 "zone_management": false, 00:12:21.505 "zone_append": false, 00:12:21.505 "compare": false, 00:12:21.505 "compare_and_write": false, 00:12:21.505 "abort": true, 00:12:21.505 "seek_hole": false, 00:12:21.505 "seek_data": false, 00:12:21.505 "copy": true, 00:12:21.505 "nvme_iov_md": false 00:12:21.505 }, 00:12:21.505 "memory_domains": [ 00:12:21.505 { 00:12:21.505 "dma_device_id": "system", 00:12:21.505 "dma_device_type": 1 00:12:21.505 }, 00:12:21.505 { 00:12:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.505 "dma_device_type": 2 00:12:21.505 } 00:12:21.505 ], 00:12:21.505 "driver_specific": {} 00:12:21.505 } 00:12:21.505 ] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.505 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.505 "name": "Existed_Raid", 00:12:21.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.505 "strip_size_kb": 64, 00:12:21.505 "state": "configuring", 00:12:21.505 "raid_level": "raid0", 00:12:21.505 "superblock": false, 00:12:21.505 "num_base_bdevs": 4, 00:12:21.505 "num_base_bdevs_discovered": 2, 00:12:21.505 "num_base_bdevs_operational": 4, 00:12:21.505 "base_bdevs_list": [ 00:12:21.505 { 00:12:21.505 "name": "BaseBdev1", 00:12:21.505 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:21.505 "is_configured": true, 00:12:21.505 "data_offset": 0, 00:12:21.505 "data_size": 65536 00:12:21.505 }, 00:12:21.505 { 00:12:21.505 "name": "BaseBdev2", 00:12:21.505 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:21.505 
"is_configured": true, 00:12:21.505 "data_offset": 0, 00:12:21.505 "data_size": 65536 00:12:21.505 }, 00:12:21.505 { 00:12:21.505 "name": "BaseBdev3", 00:12:21.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.505 "is_configured": false, 00:12:21.505 "data_offset": 0, 00:12:21.505 "data_size": 0 00:12:21.505 }, 00:12:21.505 { 00:12:21.505 "name": "BaseBdev4", 00:12:21.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.506 "is_configured": false, 00:12:21.506 "data_offset": 0, 00:12:21.506 "data_size": 0 00:12:21.506 } 00:12:21.506 ] 00:12:21.506 }' 00:12:21.506 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.506 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.764 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 [2024-12-05 20:05:23.249518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.022 BaseBdev3 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 [ 00:12:22.022 { 00:12:22.022 "name": "BaseBdev3", 00:12:22.022 "aliases": [ 00:12:22.022 "8be58a6a-441a-4b51-ad23-28eaa8beed70" 00:12:22.022 ], 00:12:22.022 "product_name": "Malloc disk", 00:12:22.022 "block_size": 512, 00:12:22.022 "num_blocks": 65536, 00:12:22.022 "uuid": "8be58a6a-441a-4b51-ad23-28eaa8beed70", 00:12:22.022 "assigned_rate_limits": { 00:12:22.022 "rw_ios_per_sec": 0, 00:12:22.022 "rw_mbytes_per_sec": 0, 00:12:22.022 "r_mbytes_per_sec": 0, 00:12:22.022 "w_mbytes_per_sec": 0 00:12:22.022 }, 00:12:22.022 "claimed": true, 00:12:22.022 "claim_type": "exclusive_write", 00:12:22.022 "zoned": false, 00:12:22.022 "supported_io_types": { 00:12:22.022 "read": true, 00:12:22.022 "write": true, 00:12:22.022 "unmap": true, 00:12:22.022 "flush": true, 00:12:22.022 "reset": true, 00:12:22.022 "nvme_admin": false, 00:12:22.022 "nvme_io": false, 00:12:22.022 "nvme_io_md": false, 00:12:22.022 "write_zeroes": true, 00:12:22.022 "zcopy": true, 00:12:22.022 "get_zone_info": false, 00:12:22.022 "zone_management": false, 00:12:22.022 "zone_append": false, 00:12:22.022 "compare": false, 00:12:22.022 "compare_and_write": false, 
00:12:22.022 "abort": true, 00:12:22.022 "seek_hole": false, 00:12:22.022 "seek_data": false, 00:12:22.022 "copy": true, 00:12:22.022 "nvme_iov_md": false 00:12:22.022 }, 00:12:22.022 "memory_domains": [ 00:12:22.022 { 00:12:22.022 "dma_device_id": "system", 00:12:22.022 "dma_device_type": 1 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.022 "dma_device_type": 2 00:12:22.022 } 00:12:22.022 ], 00:12:22.022 "driver_specific": {} 00:12:22.022 } 00:12:22.022 ] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.022 "name": "Existed_Raid", 00:12:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.022 "strip_size_kb": 64, 00:12:22.022 "state": "configuring", 00:12:22.022 "raid_level": "raid0", 00:12:22.022 "superblock": false, 00:12:22.022 "num_base_bdevs": 4, 00:12:22.022 "num_base_bdevs_discovered": 3, 00:12:22.022 "num_base_bdevs_operational": 4, 00:12:22.022 "base_bdevs_list": [ 00:12:22.022 { 00:12:22.022 "name": "BaseBdev1", 00:12:22.022 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:22.022 "is_configured": true, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 65536 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "name": "BaseBdev2", 00:12:22.022 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:22.022 "is_configured": true, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 65536 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "name": "BaseBdev3", 00:12:22.022 "uuid": "8be58a6a-441a-4b51-ad23-28eaa8beed70", 00:12:22.022 "is_configured": true, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 65536 00:12:22.022 }, 00:12:22.023 { 00:12:22.023 "name": "BaseBdev4", 00:12:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.023 "is_configured": false, 
00:12:22.023 "data_offset": 0, 00:12:22.023 "data_size": 0 00:12:22.023 } 00:12:22.023 ] 00:12:22.023 }' 00:12:22.023 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.023 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.589 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:22.589 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.589 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.589 [2024-12-05 20:05:23.795714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.590 [2024-12-05 20:05:23.795853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.590 [2024-12-05 20:05:23.795882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:22.590 [2024-12-05 20:05:23.796267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.590 [2024-12-05 20:05:23.796496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.590 [2024-12-05 20:05:23.796545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:22.590 [2024-12-05 20:05:23.796879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.590 BaseBdev4 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 [ 00:12:22.590 { 00:12:22.590 "name": "BaseBdev4", 00:12:22.590 "aliases": [ 00:12:22.590 "f13826b8-b923-4bd7-aecf-9c44e7d75343" 00:12:22.590 ], 00:12:22.590 "product_name": "Malloc disk", 00:12:22.590 "block_size": 512, 00:12:22.590 "num_blocks": 65536, 00:12:22.590 "uuid": "f13826b8-b923-4bd7-aecf-9c44e7d75343", 00:12:22.590 "assigned_rate_limits": { 00:12:22.590 "rw_ios_per_sec": 0, 00:12:22.590 "rw_mbytes_per_sec": 0, 00:12:22.590 "r_mbytes_per_sec": 0, 00:12:22.590 "w_mbytes_per_sec": 0 00:12:22.590 }, 00:12:22.590 "claimed": true, 00:12:22.590 "claim_type": "exclusive_write", 00:12:22.590 "zoned": false, 00:12:22.590 "supported_io_types": { 00:12:22.590 "read": true, 00:12:22.590 "write": true, 00:12:22.590 "unmap": true, 00:12:22.590 "flush": true, 00:12:22.590 "reset": true, 00:12:22.590 
"nvme_admin": false, 00:12:22.590 "nvme_io": false, 00:12:22.590 "nvme_io_md": false, 00:12:22.590 "write_zeroes": true, 00:12:22.590 "zcopy": true, 00:12:22.590 "get_zone_info": false, 00:12:22.590 "zone_management": false, 00:12:22.590 "zone_append": false, 00:12:22.590 "compare": false, 00:12:22.590 "compare_and_write": false, 00:12:22.590 "abort": true, 00:12:22.590 "seek_hole": false, 00:12:22.590 "seek_data": false, 00:12:22.590 "copy": true, 00:12:22.590 "nvme_iov_md": false 00:12:22.590 }, 00:12:22.590 "memory_domains": [ 00:12:22.590 { 00:12:22.590 "dma_device_id": "system", 00:12:22.590 "dma_device_type": 1 00:12:22.590 }, 00:12:22.590 { 00:12:22.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.590 "dma_device_type": 2 00:12:22.590 } 00:12:22.590 ], 00:12:22.590 "driver_specific": {} 00:12:22.590 } 00:12:22.590 ] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.590 20:05:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.590 "name": "Existed_Raid", 00:12:22.590 "uuid": "0cff78e7-b51d-45e0-bc13-d025e27b09e0", 00:12:22.590 "strip_size_kb": 64, 00:12:22.590 "state": "online", 00:12:22.590 "raid_level": "raid0", 00:12:22.590 "superblock": false, 00:12:22.590 "num_base_bdevs": 4, 00:12:22.590 "num_base_bdevs_discovered": 4, 00:12:22.590 "num_base_bdevs_operational": 4, 00:12:22.590 "base_bdevs_list": [ 00:12:22.590 { 00:12:22.590 "name": "BaseBdev1", 00:12:22.590 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:22.590 "is_configured": true, 00:12:22.590 "data_offset": 0, 00:12:22.590 "data_size": 65536 00:12:22.590 }, 00:12:22.590 { 00:12:22.590 "name": "BaseBdev2", 00:12:22.590 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:22.590 "is_configured": true, 00:12:22.590 "data_offset": 0, 00:12:22.590 "data_size": 65536 00:12:22.590 }, 00:12:22.590 { 00:12:22.590 "name": "BaseBdev3", 00:12:22.590 "uuid": 
"8be58a6a-441a-4b51-ad23-28eaa8beed70", 00:12:22.590 "is_configured": true, 00:12:22.590 "data_offset": 0, 00:12:22.590 "data_size": 65536 00:12:22.590 }, 00:12:22.590 { 00:12:22.590 "name": "BaseBdev4", 00:12:22.590 "uuid": "f13826b8-b923-4bd7-aecf-9c44e7d75343", 00:12:22.590 "is_configured": true, 00:12:22.590 "data_offset": 0, 00:12:22.590 "data_size": 65536 00:12:22.590 } 00:12:22.590 ] 00:12:22.590 }' 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.590 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.847 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.847 [2024-12-05 20:05:24.279361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.105 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.105 20:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.105 "name": "Existed_Raid", 00:12:23.105 "aliases": [ 00:12:23.105 "0cff78e7-b51d-45e0-bc13-d025e27b09e0" 00:12:23.105 ], 00:12:23.105 "product_name": "Raid Volume", 00:12:23.105 "block_size": 512, 00:12:23.105 "num_blocks": 262144, 00:12:23.105 "uuid": "0cff78e7-b51d-45e0-bc13-d025e27b09e0", 00:12:23.105 "assigned_rate_limits": { 00:12:23.105 "rw_ios_per_sec": 0, 00:12:23.105 "rw_mbytes_per_sec": 0, 00:12:23.105 "r_mbytes_per_sec": 0, 00:12:23.105 "w_mbytes_per_sec": 0 00:12:23.105 }, 00:12:23.105 "claimed": false, 00:12:23.105 "zoned": false, 00:12:23.105 "supported_io_types": { 00:12:23.105 "read": true, 00:12:23.105 "write": true, 00:12:23.105 "unmap": true, 00:12:23.105 "flush": true, 00:12:23.105 "reset": true, 00:12:23.105 "nvme_admin": false, 00:12:23.105 "nvme_io": false, 00:12:23.105 "nvme_io_md": false, 00:12:23.105 "write_zeroes": true, 00:12:23.105 "zcopy": false, 00:12:23.105 "get_zone_info": false, 00:12:23.105 "zone_management": false, 00:12:23.105 "zone_append": false, 00:12:23.105 "compare": false, 00:12:23.105 "compare_and_write": false, 00:12:23.105 "abort": false, 00:12:23.105 "seek_hole": false, 00:12:23.105 "seek_data": false, 00:12:23.105 "copy": false, 00:12:23.105 "nvme_iov_md": false 00:12:23.105 }, 00:12:23.105 "memory_domains": [ 00:12:23.105 { 00:12:23.105 "dma_device_id": "system", 00:12:23.105 "dma_device_type": 1 00:12:23.105 }, 00:12:23.105 { 00:12:23.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.105 "dma_device_type": 2 00:12:23.105 }, 00:12:23.105 { 00:12:23.105 "dma_device_id": "system", 00:12:23.105 "dma_device_type": 1 00:12:23.105 }, 00:12:23.105 { 00:12:23.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.106 "dma_device_type": 2 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "dma_device_id": "system", 00:12:23.106 "dma_device_type": 1 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:23.106 "dma_device_type": 2 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "dma_device_id": "system", 00:12:23.106 "dma_device_type": 1 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.106 "dma_device_type": 2 00:12:23.106 } 00:12:23.106 ], 00:12:23.106 "driver_specific": { 00:12:23.106 "raid": { 00:12:23.106 "uuid": "0cff78e7-b51d-45e0-bc13-d025e27b09e0", 00:12:23.106 "strip_size_kb": 64, 00:12:23.106 "state": "online", 00:12:23.106 "raid_level": "raid0", 00:12:23.106 "superblock": false, 00:12:23.106 "num_base_bdevs": 4, 00:12:23.106 "num_base_bdevs_discovered": 4, 00:12:23.106 "num_base_bdevs_operational": 4, 00:12:23.106 "base_bdevs_list": [ 00:12:23.106 { 00:12:23.106 "name": "BaseBdev1", 00:12:23.106 "uuid": "d385a25e-3f12-46b5-98d1-7837f19e1fb2", 00:12:23.106 "is_configured": true, 00:12:23.106 "data_offset": 0, 00:12:23.106 "data_size": 65536 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "name": "BaseBdev2", 00:12:23.106 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:23.106 "is_configured": true, 00:12:23.106 "data_offset": 0, 00:12:23.106 "data_size": 65536 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "name": "BaseBdev3", 00:12:23.106 "uuid": "8be58a6a-441a-4b51-ad23-28eaa8beed70", 00:12:23.106 "is_configured": true, 00:12:23.106 "data_offset": 0, 00:12:23.106 "data_size": 65536 00:12:23.106 }, 00:12:23.106 { 00:12:23.106 "name": "BaseBdev4", 00:12:23.106 "uuid": "f13826b8-b923-4bd7-aecf-9c44e7d75343", 00:12:23.106 "is_configured": true, 00:12:23.106 "data_offset": 0, 00:12:23.106 "data_size": 65536 00:12:23.106 } 00:12:23.106 ] 00:12:23.106 } 00:12:23.106 } 00:12:23.106 }' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:23.106 BaseBdev2 00:12:23.106 BaseBdev3 
00:12:23.106 BaseBdev4' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.106 20:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.106 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.365 20:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.365 [2024-12-05 20:05:24.586497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.365 [2024-12-05 20:05:24.586530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.365 [2024-12-05 20:05:24.586584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.365 "name": "Existed_Raid", 00:12:23.365 "uuid": "0cff78e7-b51d-45e0-bc13-d025e27b09e0", 00:12:23.365 "strip_size_kb": 64, 00:12:23.365 "state": "offline", 00:12:23.365 "raid_level": "raid0", 00:12:23.365 "superblock": false, 00:12:23.365 "num_base_bdevs": 4, 00:12:23.365 "num_base_bdevs_discovered": 3, 00:12:23.365 "num_base_bdevs_operational": 3, 00:12:23.365 "base_bdevs_list": [ 00:12:23.365 { 00:12:23.365 "name": null, 00:12:23.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.365 "is_configured": false, 00:12:23.365 "data_offset": 0, 00:12:23.365 "data_size": 65536 00:12:23.365 }, 00:12:23.365 { 00:12:23.365 "name": "BaseBdev2", 00:12:23.365 "uuid": "4efb20b6-6f2b-4aa2-81dc-62d51ad344dc", 00:12:23.365 "is_configured": 
true, 00:12:23.365 "data_offset": 0, 00:12:23.365 "data_size": 65536 00:12:23.365 }, 00:12:23.365 { 00:12:23.365 "name": "BaseBdev3", 00:12:23.365 "uuid": "8be58a6a-441a-4b51-ad23-28eaa8beed70", 00:12:23.365 "is_configured": true, 00:12:23.365 "data_offset": 0, 00:12:23.365 "data_size": 65536 00:12:23.365 }, 00:12:23.365 { 00:12:23.365 "name": "BaseBdev4", 00:12:23.365 "uuid": "f13826b8-b923-4bd7-aecf-9c44e7d75343", 00:12:23.365 "is_configured": true, 00:12:23.365 "data_offset": 0, 00:12:23.365 "data_size": 65536 00:12:23.365 } 00:12:23.365 ] 00:12:23.365 }' 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.365 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.933 [2024-12-05 20:05:25.183377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.933 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.933 [2024-12-05 20:05:25.339428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.193 20:05:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.193 [2024-12-05 20:05:25.501382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:24.193 [2024-12-05 20:05:25.501438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:24.193 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 BaseBdev2 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 [ 00:12:24.454 { 00:12:24.454 "name": "BaseBdev2", 00:12:24.454 "aliases": [ 00:12:24.454 "1af657e3-d6c2-441f-8439-d1ffb271be6b" 00:12:24.454 ], 00:12:24.454 "product_name": "Malloc disk", 00:12:24.454 "block_size": 512, 00:12:24.454 "num_blocks": 65536, 00:12:24.454 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:24.454 "assigned_rate_limits": { 00:12:24.454 "rw_ios_per_sec": 0, 00:12:24.454 "rw_mbytes_per_sec": 0, 00:12:24.454 "r_mbytes_per_sec": 0, 00:12:24.454 "w_mbytes_per_sec": 0 00:12:24.454 }, 00:12:24.454 "claimed": false, 00:12:24.454 "zoned": false, 00:12:24.454 "supported_io_types": { 00:12:24.454 "read": true, 00:12:24.454 "write": true, 00:12:24.454 "unmap": true, 00:12:24.454 "flush": true, 00:12:24.454 "reset": true, 00:12:24.454 "nvme_admin": false, 00:12:24.454 "nvme_io": false, 00:12:24.454 "nvme_io_md": false, 00:12:24.454 "write_zeroes": true, 00:12:24.454 "zcopy": true, 00:12:24.454 "get_zone_info": false, 00:12:24.454 "zone_management": false, 00:12:24.454 "zone_append": false, 00:12:24.454 "compare": false, 00:12:24.454 "compare_and_write": false, 00:12:24.454 "abort": true, 00:12:24.454 "seek_hole": false, 00:12:24.454 
"seek_data": false, 00:12:24.454 "copy": true, 00:12:24.454 "nvme_iov_md": false 00:12:24.454 }, 00:12:24.454 "memory_domains": [ 00:12:24.454 { 00:12:24.454 "dma_device_id": "system", 00:12:24.454 "dma_device_type": 1 00:12:24.454 }, 00:12:24.454 { 00:12:24.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.454 "dma_device_type": 2 00:12:24.454 } 00:12:24.454 ], 00:12:24.454 "driver_specific": {} 00:12:24.454 } 00:12:24.454 ] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 BaseBdev3 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 [ 00:12:24.454 { 00:12:24.454 "name": "BaseBdev3", 00:12:24.454 "aliases": [ 00:12:24.454 "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1" 00:12:24.454 ], 00:12:24.454 "product_name": "Malloc disk", 00:12:24.454 "block_size": 512, 00:12:24.454 "num_blocks": 65536, 00:12:24.454 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:24.454 "assigned_rate_limits": { 00:12:24.454 "rw_ios_per_sec": 0, 00:12:24.454 "rw_mbytes_per_sec": 0, 00:12:24.454 "r_mbytes_per_sec": 0, 00:12:24.454 "w_mbytes_per_sec": 0 00:12:24.454 }, 00:12:24.454 "claimed": false, 00:12:24.454 "zoned": false, 00:12:24.454 "supported_io_types": { 00:12:24.454 "read": true, 00:12:24.454 "write": true, 00:12:24.454 "unmap": true, 00:12:24.454 "flush": true, 00:12:24.454 "reset": true, 00:12:24.454 "nvme_admin": false, 00:12:24.454 "nvme_io": false, 00:12:24.454 "nvme_io_md": false, 00:12:24.454 "write_zeroes": true, 00:12:24.454 "zcopy": true, 00:12:24.454 "get_zone_info": false, 00:12:24.454 "zone_management": false, 00:12:24.454 "zone_append": false, 00:12:24.454 "compare": false, 00:12:24.454 "compare_and_write": false, 00:12:24.454 "abort": true, 00:12:24.454 "seek_hole": false, 00:12:24.454 "seek_data": false, 
00:12:24.454 "copy": true, 00:12:24.454 "nvme_iov_md": false 00:12:24.454 }, 00:12:24.454 "memory_domains": [ 00:12:24.454 { 00:12:24.454 "dma_device_id": "system", 00:12:24.454 "dma_device_type": 1 00:12:24.454 }, 00:12:24.454 { 00:12:24.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.454 "dma_device_type": 2 00:12:24.454 } 00:12:24.454 ], 00:12:24.454 "driver_specific": {} 00:12:24.454 } 00:12:24.454 ] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 BaseBdev4 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.454 
20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.454 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.455 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.455 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.455 [ 00:12:24.455 { 00:12:24.455 "name": "BaseBdev4", 00:12:24.455 "aliases": [ 00:12:24.455 "ec80ec40-cebf-4733-952e-89a85fa05a14" 00:12:24.455 ], 00:12:24.455 "product_name": "Malloc disk", 00:12:24.455 "block_size": 512, 00:12:24.455 "num_blocks": 65536, 00:12:24.455 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:24.455 "assigned_rate_limits": { 00:12:24.455 "rw_ios_per_sec": 0, 00:12:24.455 "rw_mbytes_per_sec": 0, 00:12:24.455 "r_mbytes_per_sec": 0, 00:12:24.455 "w_mbytes_per_sec": 0 00:12:24.455 }, 00:12:24.455 "claimed": false, 00:12:24.455 "zoned": false, 00:12:24.455 "supported_io_types": { 00:12:24.455 "read": true, 00:12:24.455 "write": true, 00:12:24.455 "unmap": true, 00:12:24.455 "flush": true, 00:12:24.455 "reset": true, 00:12:24.455 "nvme_admin": false, 00:12:24.455 "nvme_io": false, 00:12:24.455 "nvme_io_md": false, 00:12:24.455 "write_zeroes": true, 00:12:24.455 "zcopy": true, 00:12:24.455 "get_zone_info": false, 00:12:24.455 "zone_management": false, 00:12:24.455 "zone_append": false, 00:12:24.455 "compare": false, 00:12:24.455 "compare_and_write": false, 00:12:24.455 "abort": true, 00:12:24.455 "seek_hole": false, 00:12:24.455 "seek_data": false, 00:12:24.455 
"copy": true, 00:12:24.455 "nvme_iov_md": false 00:12:24.455 }, 00:12:24.455 "memory_domains": [ 00:12:24.455 { 00:12:24.455 "dma_device_id": "system", 00:12:24.455 "dma_device_type": 1 00:12:24.455 }, 00:12:24.455 { 00:12:24.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.455 "dma_device_type": 2 00:12:24.715 } 00:12:24.715 ], 00:12:24.715 "driver_specific": {} 00:12:24.715 } 00:12:24.715 ] 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.715 [2024-12-05 20:05:25.892713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.715 [2024-12-05 20:05:25.892802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.715 [2024-12-05 20:05:25.892848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.715 [2024-12-05 20:05:25.894728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.715 [2024-12-05 20:05:25.894824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.715 20:05:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.715 "name": "Existed_Raid", 00:12:24.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.715 "strip_size_kb": 64, 00:12:24.715 "state": "configuring", 00:12:24.715 
"raid_level": "raid0", 00:12:24.715 "superblock": false, 00:12:24.715 "num_base_bdevs": 4, 00:12:24.715 "num_base_bdevs_discovered": 3, 00:12:24.715 "num_base_bdevs_operational": 4, 00:12:24.715 "base_bdevs_list": [ 00:12:24.715 { 00:12:24.715 "name": "BaseBdev1", 00:12:24.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.715 "is_configured": false, 00:12:24.715 "data_offset": 0, 00:12:24.715 "data_size": 0 00:12:24.715 }, 00:12:24.715 { 00:12:24.715 "name": "BaseBdev2", 00:12:24.715 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:24.715 "is_configured": true, 00:12:24.715 "data_offset": 0, 00:12:24.715 "data_size": 65536 00:12:24.715 }, 00:12:24.715 { 00:12:24.715 "name": "BaseBdev3", 00:12:24.715 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:24.715 "is_configured": true, 00:12:24.715 "data_offset": 0, 00:12:24.715 "data_size": 65536 00:12:24.715 }, 00:12:24.715 { 00:12:24.715 "name": "BaseBdev4", 00:12:24.715 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:24.715 "is_configured": true, 00:12:24.715 "data_offset": 0, 00:12:24.715 "data_size": 65536 00:12:24.715 } 00:12:24.715 ] 00:12:24.715 }' 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.715 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.976 [2024-12-05 20:05:26.264108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.976 "name": "Existed_Raid", 00:12:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.976 "strip_size_kb": 64, 00:12:24.976 "state": "configuring", 00:12:24.976 "raid_level": "raid0", 00:12:24.976 "superblock": false, 00:12:24.976 
"num_base_bdevs": 4, 00:12:24.976 "num_base_bdevs_discovered": 2, 00:12:24.976 "num_base_bdevs_operational": 4, 00:12:24.976 "base_bdevs_list": [ 00:12:24.976 { 00:12:24.976 "name": "BaseBdev1", 00:12:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.976 "is_configured": false, 00:12:24.976 "data_offset": 0, 00:12:24.976 "data_size": 0 00:12:24.976 }, 00:12:24.976 { 00:12:24.976 "name": null, 00:12:24.976 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:24.976 "is_configured": false, 00:12:24.976 "data_offset": 0, 00:12:24.976 "data_size": 65536 00:12:24.976 }, 00:12:24.976 { 00:12:24.976 "name": "BaseBdev3", 00:12:24.976 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:24.976 "is_configured": true, 00:12:24.976 "data_offset": 0, 00:12:24.976 "data_size": 65536 00:12:24.976 }, 00:12:24.976 { 00:12:24.976 "name": "BaseBdev4", 00:12:24.976 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:24.976 "is_configured": true, 00:12:24.976 "data_offset": 0, 00:12:24.976 "data_size": 65536 00:12:24.976 } 00:12:24.976 ] 00:12:24.976 }' 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.976 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.236 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.236 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.236 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.236 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.497 20:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.497 [2024-12-05 20:05:26.751721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.497 BaseBdev1 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.497 [ 00:12:25.497 { 00:12:25.497 "name": "BaseBdev1", 00:12:25.497 "aliases": [ 00:12:25.497 "35a29201-84de-43f2-a0c4-47b852fa70fb" 00:12:25.497 ], 00:12:25.497 "product_name": "Malloc disk", 00:12:25.497 "block_size": 512, 00:12:25.497 "num_blocks": 65536, 00:12:25.497 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:25.497 "assigned_rate_limits": { 00:12:25.497 "rw_ios_per_sec": 0, 00:12:25.497 "rw_mbytes_per_sec": 0, 00:12:25.497 "r_mbytes_per_sec": 0, 00:12:25.497 "w_mbytes_per_sec": 0 00:12:25.497 }, 00:12:25.497 "claimed": true, 00:12:25.497 "claim_type": "exclusive_write", 00:12:25.497 "zoned": false, 00:12:25.497 "supported_io_types": { 00:12:25.497 "read": true, 00:12:25.497 "write": true, 00:12:25.497 "unmap": true, 00:12:25.497 "flush": true, 00:12:25.497 "reset": true, 00:12:25.497 "nvme_admin": false, 00:12:25.497 "nvme_io": false, 00:12:25.497 "nvme_io_md": false, 00:12:25.497 "write_zeroes": true, 00:12:25.497 "zcopy": true, 00:12:25.497 "get_zone_info": false, 00:12:25.497 "zone_management": false, 00:12:25.497 "zone_append": false, 00:12:25.497 "compare": false, 00:12:25.497 "compare_and_write": false, 00:12:25.497 "abort": true, 00:12:25.497 "seek_hole": false, 00:12:25.497 "seek_data": false, 00:12:25.497 "copy": true, 00:12:25.497 "nvme_iov_md": false 00:12:25.497 }, 00:12:25.497 "memory_domains": [ 00:12:25.497 { 00:12:25.497 "dma_device_id": "system", 00:12:25.497 "dma_device_type": 1 00:12:25.497 }, 00:12:25.497 { 00:12:25.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.497 "dma_device_type": 2 00:12:25.497 } 00:12:25.497 ], 00:12:25.497 "driver_specific": {} 00:12:25.497 } 00:12:25.497 ] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.497 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.498 "name": "Existed_Raid", 00:12:25.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.498 "strip_size_kb": 64, 00:12:25.498 "state": "configuring", 00:12:25.498 "raid_level": "raid0", 00:12:25.498 "superblock": false, 
00:12:25.498 "num_base_bdevs": 4, 00:12:25.498 "num_base_bdevs_discovered": 3, 00:12:25.498 "num_base_bdevs_operational": 4, 00:12:25.498 "base_bdevs_list": [ 00:12:25.498 { 00:12:25.498 "name": "BaseBdev1", 00:12:25.498 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 0, 00:12:25.498 "data_size": 65536 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "name": null, 00:12:25.498 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:25.498 "is_configured": false, 00:12:25.498 "data_offset": 0, 00:12:25.498 "data_size": 65536 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "name": "BaseBdev3", 00:12:25.498 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 0, 00:12:25.498 "data_size": 65536 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "name": "BaseBdev4", 00:12:25.498 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 0, 00:12:25.498 "data_size": 65536 00:12:25.498 } 00:12:25.498 ] 00:12:25.498 }' 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.498 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:26.067 20:05:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.067 [2024-12-05 20:05:27.250994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.067 20:05:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.067 "name": "Existed_Raid", 00:12:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.067 "strip_size_kb": 64, 00:12:26.067 "state": "configuring", 00:12:26.067 "raid_level": "raid0", 00:12:26.067 "superblock": false, 00:12:26.067 "num_base_bdevs": 4, 00:12:26.067 "num_base_bdevs_discovered": 2, 00:12:26.067 "num_base_bdevs_operational": 4, 00:12:26.067 "base_bdevs_list": [ 00:12:26.067 { 00:12:26.067 "name": "BaseBdev1", 00:12:26.067 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:26.067 "is_configured": true, 00:12:26.067 "data_offset": 0, 00:12:26.067 "data_size": 65536 00:12:26.067 }, 00:12:26.067 { 00:12:26.067 "name": null, 00:12:26.067 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:26.067 "is_configured": false, 00:12:26.067 "data_offset": 0, 00:12:26.067 "data_size": 65536 00:12:26.067 }, 00:12:26.067 { 00:12:26.067 "name": null, 00:12:26.067 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:26.067 "is_configured": false, 00:12:26.067 "data_offset": 0, 00:12:26.067 "data_size": 65536 00:12:26.067 }, 00:12:26.067 { 00:12:26.067 "name": "BaseBdev4", 00:12:26.067 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:26.067 "is_configured": true, 00:12:26.067 "data_offset": 0, 00:12:26.067 "data_size": 65536 00:12:26.067 } 00:12:26.067 ] 00:12:26.067 }' 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.067 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 [2024-12-05 20:05:27.718165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.587 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.587 "name": "Existed_Raid", 00:12:26.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.587 "strip_size_kb": 64, 00:12:26.587 "state": "configuring", 00:12:26.587 "raid_level": "raid0", 00:12:26.587 "superblock": false, 00:12:26.587 "num_base_bdevs": 4, 00:12:26.587 "num_base_bdevs_discovered": 3, 00:12:26.587 "num_base_bdevs_operational": 4, 00:12:26.587 "base_bdevs_list": [ 00:12:26.587 { 00:12:26.587 "name": "BaseBdev1", 00:12:26.587 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:26.587 "is_configured": true, 00:12:26.587 "data_offset": 0, 00:12:26.587 "data_size": 65536 00:12:26.587 }, 00:12:26.587 { 00:12:26.587 "name": null, 00:12:26.587 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:26.587 "is_configured": false, 00:12:26.587 "data_offset": 0, 00:12:26.587 "data_size": 65536 00:12:26.587 }, 00:12:26.587 { 00:12:26.587 "name": "BaseBdev3", 00:12:26.587 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 
00:12:26.587 "is_configured": true, 00:12:26.587 "data_offset": 0, 00:12:26.587 "data_size": 65536 00:12:26.587 }, 00:12:26.587 { 00:12:26.587 "name": "BaseBdev4", 00:12:26.587 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:26.587 "is_configured": true, 00:12:26.587 "data_offset": 0, 00:12:26.587 "data_size": 65536 00:12:26.587 } 00:12:26.587 ] 00:12:26.587 }' 00:12:26.587 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.588 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.847 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 [2024-12-05 20:05:28.213349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.107 20:05:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.107 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.107 "name": "Existed_Raid", 00:12:27.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.107 "strip_size_kb": 64, 00:12:27.107 "state": "configuring", 00:12:27.107 "raid_level": "raid0", 00:12:27.107 "superblock": false, 00:12:27.107 "num_base_bdevs": 4, 00:12:27.107 "num_base_bdevs_discovered": 2, 00:12:27.107 
"num_base_bdevs_operational": 4, 00:12:27.107 "base_bdevs_list": [ 00:12:27.107 { 00:12:27.107 "name": null, 00:12:27.107 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:27.107 "is_configured": false, 00:12:27.107 "data_offset": 0, 00:12:27.107 "data_size": 65536 00:12:27.107 }, 00:12:27.107 { 00:12:27.107 "name": null, 00:12:27.107 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:27.107 "is_configured": false, 00:12:27.107 "data_offset": 0, 00:12:27.107 "data_size": 65536 00:12:27.107 }, 00:12:27.107 { 00:12:27.107 "name": "BaseBdev3", 00:12:27.107 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:27.107 "is_configured": true, 00:12:27.107 "data_offset": 0, 00:12:27.107 "data_size": 65536 00:12:27.107 }, 00:12:27.107 { 00:12:27.107 "name": "BaseBdev4", 00:12:27.108 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:27.108 "is_configured": true, 00:12:27.108 "data_offset": 0, 00:12:27.108 "data_size": 65536 00:12:27.108 } 00:12:27.108 ] 00:12:27.108 }' 00:12:27.108 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.108 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.677 [2024-12-05 20:05:28.863848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.677 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.678 
20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.678 "name": "Existed_Raid", 00:12:27.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.678 "strip_size_kb": 64, 00:12:27.678 "state": "configuring", 00:12:27.678 "raid_level": "raid0", 00:12:27.678 "superblock": false, 00:12:27.678 "num_base_bdevs": 4, 00:12:27.678 "num_base_bdevs_discovered": 3, 00:12:27.678 "num_base_bdevs_operational": 4, 00:12:27.678 "base_bdevs_list": [ 00:12:27.678 { 00:12:27.678 "name": null, 00:12:27.678 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:27.678 "is_configured": false, 00:12:27.678 "data_offset": 0, 00:12:27.678 "data_size": 65536 00:12:27.678 }, 00:12:27.678 { 00:12:27.678 "name": "BaseBdev2", 00:12:27.678 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:27.678 "is_configured": true, 00:12:27.678 "data_offset": 0, 00:12:27.678 "data_size": 65536 00:12:27.678 }, 00:12:27.678 { 00:12:27.678 "name": "BaseBdev3", 00:12:27.678 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:27.678 "is_configured": true, 00:12:27.678 "data_offset": 0, 00:12:27.678 "data_size": 65536 00:12:27.678 }, 00:12:27.678 { 00:12:27.678 "name": "BaseBdev4", 00:12:27.678 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:27.678 "is_configured": true, 00:12:27.678 "data_offset": 0, 00:12:27.678 "data_size": 65536 00:12:27.678 } 00:12:27.678 ] 00:12:27.678 }' 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.678 20:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.938 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.938 20:05:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.938 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.938 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.938 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 35a29201-84de-43f2-a0c4-47b852fa70fb 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.198 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.198 [2024-12-05 20:05:29.475750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:28.198 [2024-12-05 20:05:29.475873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:28.198 [2024-12-05 20:05:29.475899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:28.198 [2024-12-05 20:05:29.476203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:28.199 
[2024-12-05 20:05:29.476383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:28.199 [2024-12-05 20:05:29.476396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:28.199 [2024-12-05 20:05:29.476679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.199 NewBaseBdev 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:28.199 [ 00:12:28.199 { 00:12:28.199 "name": "NewBaseBdev", 00:12:28.199 "aliases": [ 00:12:28.199 "35a29201-84de-43f2-a0c4-47b852fa70fb" 00:12:28.199 ], 00:12:28.199 "product_name": "Malloc disk", 00:12:28.199 "block_size": 512, 00:12:28.199 "num_blocks": 65536, 00:12:28.199 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:28.199 "assigned_rate_limits": { 00:12:28.199 "rw_ios_per_sec": 0, 00:12:28.199 "rw_mbytes_per_sec": 0, 00:12:28.199 "r_mbytes_per_sec": 0, 00:12:28.199 "w_mbytes_per_sec": 0 00:12:28.199 }, 00:12:28.199 "claimed": true, 00:12:28.199 "claim_type": "exclusive_write", 00:12:28.199 "zoned": false, 00:12:28.199 "supported_io_types": { 00:12:28.199 "read": true, 00:12:28.199 "write": true, 00:12:28.199 "unmap": true, 00:12:28.199 "flush": true, 00:12:28.199 "reset": true, 00:12:28.199 "nvme_admin": false, 00:12:28.199 "nvme_io": false, 00:12:28.199 "nvme_io_md": false, 00:12:28.199 "write_zeroes": true, 00:12:28.199 "zcopy": true, 00:12:28.199 "get_zone_info": false, 00:12:28.199 "zone_management": false, 00:12:28.199 "zone_append": false, 00:12:28.199 "compare": false, 00:12:28.199 "compare_and_write": false, 00:12:28.199 "abort": true, 00:12:28.199 "seek_hole": false, 00:12:28.199 "seek_data": false, 00:12:28.199 "copy": true, 00:12:28.199 "nvme_iov_md": false 00:12:28.199 }, 00:12:28.199 "memory_domains": [ 00:12:28.199 { 00:12:28.199 "dma_device_id": "system", 00:12:28.199 "dma_device_type": 1 00:12:28.199 }, 00:12:28.199 { 00:12:28.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.199 "dma_device_type": 2 00:12:28.199 } 00:12:28.199 ], 00:12:28.199 "driver_specific": {} 00:12:28.199 } 00:12:28.199 ] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.199 "name": "Existed_Raid", 00:12:28.199 "uuid": "9d7fdfc7-16d4-49ca-86cf-c59361b4559f", 00:12:28.199 "strip_size_kb": 64, 00:12:28.199 "state": "online", 00:12:28.199 "raid_level": "raid0", 00:12:28.199 "superblock": false, 00:12:28.199 "num_base_bdevs": 4, 00:12:28.199 
"num_base_bdevs_discovered": 4, 00:12:28.199 "num_base_bdevs_operational": 4, 00:12:28.199 "base_bdevs_list": [ 00:12:28.199 { 00:12:28.199 "name": "NewBaseBdev", 00:12:28.199 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:28.199 "is_configured": true, 00:12:28.199 "data_offset": 0, 00:12:28.199 "data_size": 65536 00:12:28.199 }, 00:12:28.199 { 00:12:28.199 "name": "BaseBdev2", 00:12:28.199 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:28.199 "is_configured": true, 00:12:28.199 "data_offset": 0, 00:12:28.199 "data_size": 65536 00:12:28.199 }, 00:12:28.199 { 00:12:28.199 "name": "BaseBdev3", 00:12:28.199 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:28.199 "is_configured": true, 00:12:28.199 "data_offset": 0, 00:12:28.199 "data_size": 65536 00:12:28.199 }, 00:12:28.199 { 00:12:28.199 "name": "BaseBdev4", 00:12:28.199 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:28.199 "is_configured": true, 00:12:28.199 "data_offset": 0, 00:12:28.199 "data_size": 65536 00:12:28.199 } 00:12:28.199 ] 00:12:28.199 }' 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.199 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.770 [2024-12-05 20:05:30.015333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.770 "name": "Existed_Raid", 00:12:28.770 "aliases": [ 00:12:28.770 "9d7fdfc7-16d4-49ca-86cf-c59361b4559f" 00:12:28.770 ], 00:12:28.770 "product_name": "Raid Volume", 00:12:28.770 "block_size": 512, 00:12:28.770 "num_blocks": 262144, 00:12:28.770 "uuid": "9d7fdfc7-16d4-49ca-86cf-c59361b4559f", 00:12:28.770 "assigned_rate_limits": { 00:12:28.770 "rw_ios_per_sec": 0, 00:12:28.770 "rw_mbytes_per_sec": 0, 00:12:28.770 "r_mbytes_per_sec": 0, 00:12:28.770 "w_mbytes_per_sec": 0 00:12:28.770 }, 00:12:28.770 "claimed": false, 00:12:28.770 "zoned": false, 00:12:28.770 "supported_io_types": { 00:12:28.770 "read": true, 00:12:28.770 "write": true, 00:12:28.770 "unmap": true, 00:12:28.770 "flush": true, 00:12:28.770 "reset": true, 00:12:28.770 "nvme_admin": false, 00:12:28.770 "nvme_io": false, 00:12:28.770 "nvme_io_md": false, 00:12:28.770 "write_zeroes": true, 00:12:28.770 "zcopy": false, 00:12:28.770 "get_zone_info": false, 00:12:28.770 "zone_management": false, 00:12:28.770 "zone_append": false, 00:12:28.770 "compare": false, 00:12:28.770 "compare_and_write": false, 00:12:28.770 "abort": false, 00:12:28.770 "seek_hole": false, 00:12:28.770 "seek_data": false, 00:12:28.770 "copy": false, 00:12:28.770 "nvme_iov_md": false 00:12:28.770 }, 00:12:28.770 "memory_domains": [ 
00:12:28.770 { 00:12:28.770 "dma_device_id": "system", 00:12:28.770 "dma_device_type": 1 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.770 "dma_device_type": 2 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "system", 00:12:28.770 "dma_device_type": 1 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.770 "dma_device_type": 2 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "system", 00:12:28.770 "dma_device_type": 1 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.770 "dma_device_type": 2 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "system", 00:12:28.770 "dma_device_type": 1 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.770 "dma_device_type": 2 00:12:28.770 } 00:12:28.770 ], 00:12:28.770 "driver_specific": { 00:12:28.770 "raid": { 00:12:28.770 "uuid": "9d7fdfc7-16d4-49ca-86cf-c59361b4559f", 00:12:28.770 "strip_size_kb": 64, 00:12:28.770 "state": "online", 00:12:28.770 "raid_level": "raid0", 00:12:28.770 "superblock": false, 00:12:28.770 "num_base_bdevs": 4, 00:12:28.770 "num_base_bdevs_discovered": 4, 00:12:28.770 "num_base_bdevs_operational": 4, 00:12:28.770 "base_bdevs_list": [ 00:12:28.770 { 00:12:28.770 "name": "NewBaseBdev", 00:12:28.770 "uuid": "35a29201-84de-43f2-a0c4-47b852fa70fb", 00:12:28.770 "is_configured": true, 00:12:28.770 "data_offset": 0, 00:12:28.770 "data_size": 65536 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "name": "BaseBdev2", 00:12:28.770 "uuid": "1af657e3-d6c2-441f-8439-d1ffb271be6b", 00:12:28.770 "is_configured": true, 00:12:28.770 "data_offset": 0, 00:12:28.770 "data_size": 65536 00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "name": "BaseBdev3", 00:12:28.770 "uuid": "0bc851d2-8bfd-45ae-b368-62f4f8cf86f1", 00:12:28.770 "is_configured": true, 00:12:28.770 "data_offset": 0, 00:12:28.770 "data_size": 65536 
00:12:28.770 }, 00:12:28.770 { 00:12:28.770 "name": "BaseBdev4", 00:12:28.770 "uuid": "ec80ec40-cebf-4733-952e-89a85fa05a14", 00:12:28.770 "is_configured": true, 00:12:28.770 "data_offset": 0, 00:12:28.770 "data_size": 65536 00:12:28.770 } 00:12:28.770 ] 00:12:28.770 } 00:12:28.770 } 00:12:28.770 }' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:28.770 BaseBdev2 00:12:28.770 BaseBdev3 00:12:28.770 BaseBdev4' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:28.770 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.771 
20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.771 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.031 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 [2024-12-05 20:05:30.314383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.031 [2024-12-05 20:05:30.314472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.031 [2024-12-05 20:05:30.314581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.031 [2024-12-05 20:05:30.314672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.032 [2024-12-05 20:05:30.314732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69509 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69509 ']' 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69509 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69509 00:12:29.032 killing process with pid 69509 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69509' 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69509 00:12:29.032 [2024-12-05 20:05:30.362019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.032 20:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69509 00:12:29.602 [2024-12-05 20:05:30.766516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:30.575 00:12:30.575 real 0m11.653s 00:12:30.575 user 0m18.556s 00:12:30.575 sys 0m2.018s 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.575 ************************************ 00:12:30.575 END TEST raid_state_function_test 00:12:30.575 ************************************ 00:12:30.575 20:05:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:30.575 20:05:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.575 20:05:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.575 20:05:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.575 ************************************ 00:12:30.575 START TEST raid_state_function_test_sb 00:12:30.575 ************************************ 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:30.575 
20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70176 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70176' 00:12:30.575 Process raid pid: 70176 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70176 00:12:30.575 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70176 ']' 00:12:30.575 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.575 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.575 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.575 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.575 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.835 [2024-12-05 20:05:32.084717] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:30.835 [2024-12-05 20:05:32.084930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.835 [2024-12-05 20:05:32.240200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.095 [2024-12-05 20:05:32.359066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.354 [2024-12-05 20:05:32.569061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.354 [2024-12-05 20:05:32.569181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.614 [2024-12-05 20:05:32.918023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.614 [2024-12-05 20:05:32.918141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.614 [2024-12-05 20:05:32.918172] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.614 [2024-12-05 20:05:32.918196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.614 [2024-12-05 20:05:32.918214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:31.614 [2024-12-05 20:05:32.918235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.614 [2024-12-05 20:05:32.918252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.614 [2024-12-05 20:05:32.918296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.614 20:05:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.614 "name": "Existed_Raid", 00:12:31.614 "uuid": "a9a65a3b-b9ef-42df-a8cc-6f1d9e7aac06", 00:12:31.614 "strip_size_kb": 64, 00:12:31.614 "state": "configuring", 00:12:31.614 "raid_level": "raid0", 00:12:31.614 "superblock": true, 00:12:31.614 "num_base_bdevs": 4, 00:12:31.614 "num_base_bdevs_discovered": 0, 00:12:31.614 "num_base_bdevs_operational": 4, 00:12:31.614 "base_bdevs_list": [ 00:12:31.614 { 00:12:31.614 "name": "BaseBdev1", 00:12:31.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.614 "is_configured": false, 00:12:31.614 "data_offset": 0, 00:12:31.614 "data_size": 0 00:12:31.614 }, 00:12:31.614 { 00:12:31.614 "name": "BaseBdev2", 00:12:31.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.614 "is_configured": false, 00:12:31.614 "data_offset": 0, 00:12:31.614 "data_size": 0 00:12:31.614 }, 00:12:31.614 { 00:12:31.614 "name": "BaseBdev3", 00:12:31.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.614 "is_configured": false, 00:12:31.614 "data_offset": 0, 00:12:31.614 "data_size": 0 00:12:31.614 }, 00:12:31.614 { 00:12:31.614 "name": "BaseBdev4", 00:12:31.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.614 "is_configured": false, 00:12:31.614 "data_offset": 0, 00:12:31.614 "data_size": 0 00:12:31.614 } 00:12:31.614 ] 00:12:31.614 }' 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.614 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 [2024-12-05 20:05:33.381189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.184 [2024-12-05 20:05:33.381233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 [2024-12-05 20:05:33.393186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.184 [2024-12-05 20:05:33.393273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.184 [2024-12-05 20:05:33.393305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.184 [2024-12-05 20:05:33.393330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.184 [2024-12-05 20:05:33.393401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.184 [2024-12-05 20:05:33.393443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.184 [2024-12-05 20:05:33.393473] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:32.184 [2024-12-05 20:05:33.393515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 [2024-12-05 20:05:33.442612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.184 BaseBdev1 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.184 [ 00:12:32.184 { 00:12:32.184 "name": "BaseBdev1", 00:12:32.184 "aliases": [ 00:12:32.184 "84b0d28e-ebc8-44aa-97b1-79d5363def05" 00:12:32.184 ], 00:12:32.184 "product_name": "Malloc disk", 00:12:32.184 "block_size": 512, 00:12:32.184 "num_blocks": 65536, 00:12:32.184 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:32.184 "assigned_rate_limits": { 00:12:32.184 "rw_ios_per_sec": 0, 00:12:32.184 "rw_mbytes_per_sec": 0, 00:12:32.184 "r_mbytes_per_sec": 0, 00:12:32.184 "w_mbytes_per_sec": 0 00:12:32.184 }, 00:12:32.184 "claimed": true, 00:12:32.184 "claim_type": "exclusive_write", 00:12:32.184 "zoned": false, 00:12:32.184 "supported_io_types": { 00:12:32.184 "read": true, 00:12:32.184 "write": true, 00:12:32.184 "unmap": true, 00:12:32.184 "flush": true, 00:12:32.184 "reset": true, 00:12:32.184 "nvme_admin": false, 00:12:32.184 "nvme_io": false, 00:12:32.184 "nvme_io_md": false, 00:12:32.184 "write_zeroes": true, 00:12:32.184 "zcopy": true, 00:12:32.184 "get_zone_info": false, 00:12:32.184 "zone_management": false, 00:12:32.184 "zone_append": false, 00:12:32.184 "compare": false, 00:12:32.184 "compare_and_write": false, 00:12:32.184 "abort": true, 00:12:32.184 "seek_hole": false, 00:12:32.184 "seek_data": false, 00:12:32.184 "copy": true, 00:12:32.184 "nvme_iov_md": false 00:12:32.184 }, 00:12:32.184 "memory_domains": [ 00:12:32.184 { 00:12:32.184 "dma_device_id": "system", 00:12:32.184 "dma_device_type": 1 00:12:32.184 }, 00:12:32.184 { 00:12:32.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.184 "dma_device_type": 2 00:12:32.184 } 00:12:32.184 ], 00:12:32.184 "driver_specific": {} 
00:12:32.184 } 00:12:32.184 ] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.184 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.185 "name": "Existed_Raid", 00:12:32.185 "uuid": "01eca53f-9cfc-4687-9118-f29bac0de1ad", 00:12:32.185 "strip_size_kb": 64, 00:12:32.185 "state": "configuring", 00:12:32.185 "raid_level": "raid0", 00:12:32.185 "superblock": true, 00:12:32.185 "num_base_bdevs": 4, 00:12:32.185 "num_base_bdevs_discovered": 1, 00:12:32.185 "num_base_bdevs_operational": 4, 00:12:32.185 "base_bdevs_list": [ 00:12:32.185 { 00:12:32.185 "name": "BaseBdev1", 00:12:32.185 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:32.185 "is_configured": true, 00:12:32.185 "data_offset": 2048, 00:12:32.185 "data_size": 63488 00:12:32.185 }, 00:12:32.185 { 00:12:32.185 "name": "BaseBdev2", 00:12:32.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.185 "is_configured": false, 00:12:32.185 "data_offset": 0, 00:12:32.185 "data_size": 0 00:12:32.185 }, 00:12:32.185 { 00:12:32.185 "name": "BaseBdev3", 00:12:32.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.185 "is_configured": false, 00:12:32.185 "data_offset": 0, 00:12:32.185 "data_size": 0 00:12:32.185 }, 00:12:32.185 { 00:12:32.185 "name": "BaseBdev4", 00:12:32.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.185 "is_configured": false, 00:12:32.185 "data_offset": 0, 00:12:32.185 "data_size": 0 00:12:32.185 } 00:12:32.185 ] 00:12:32.185 }' 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.185 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.754 [2024-12-05 20:05:33.917848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.754 [2024-12-05 20:05:33.917908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.754 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.754 [2024-12-05 20:05:33.929886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.754 [2024-12-05 20:05:33.931778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.754 [2024-12-05 20:05:33.931828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.755 [2024-12-05 20:05:33.931841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.755 [2024-12-05 20:05:33.931853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.755 [2024-12-05 20:05:33.931860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.755 [2024-12-05 20:05:33.931869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.755 20:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.755 "name": 
"Existed_Raid", 00:12:32.755 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:32.755 "strip_size_kb": 64, 00:12:32.755 "state": "configuring", 00:12:32.755 "raid_level": "raid0", 00:12:32.755 "superblock": true, 00:12:32.755 "num_base_bdevs": 4, 00:12:32.755 "num_base_bdevs_discovered": 1, 00:12:32.755 "num_base_bdevs_operational": 4, 00:12:32.755 "base_bdevs_list": [ 00:12:32.755 { 00:12:32.755 "name": "BaseBdev1", 00:12:32.755 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:32.755 "is_configured": true, 00:12:32.755 "data_offset": 2048, 00:12:32.755 "data_size": 63488 00:12:32.755 }, 00:12:32.755 { 00:12:32.755 "name": "BaseBdev2", 00:12:32.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.755 "is_configured": false, 00:12:32.755 "data_offset": 0, 00:12:32.755 "data_size": 0 00:12:32.755 }, 00:12:32.755 { 00:12:32.755 "name": "BaseBdev3", 00:12:32.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.755 "is_configured": false, 00:12:32.755 "data_offset": 0, 00:12:32.755 "data_size": 0 00:12:32.755 }, 00:12:32.755 { 00:12:32.755 "name": "BaseBdev4", 00:12:32.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.755 "is_configured": false, 00:12:32.755 "data_offset": 0, 00:12:32.755 "data_size": 0 00:12:32.755 } 00:12:32.755 ] 00:12:32.755 }' 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.755 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.015 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.015 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.015 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.276 [2024-12-05 20:05:34.462717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
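The `waitforbdev` calls that gate each base-bdev addition in this trace boil down to a poll-until-visible loop around `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>`. The sketch below is a simplified, self-contained reconstruction, not the real `autotest_common.sh` helper; `have_bdev` is a hypothetical stub standing in for the RPC call so the sketch runs without an SPDK target.

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforbdev pattern seen in the log. The real
# helper in autotest_common.sh polls "rpc_cmd bdev_get_bdevs -b <name>";
# have_bdev below is a hypothetical stand-in so this runs standalone.
waitforbdev() {
  local bdev_name=$1 tries=${2:-10} i
  for ((i = 0; i < tries; i++)); do
    if have_bdev "$bdev_name"; then
      return 0              # bdev is visible; matches "return 0" in the log
    fi
    sleep 0.1               # back off between polls
  done
  return 1                  # bdev never appeared within the deadline
}

have_bdev() { [[ $1 == BaseBdev2 ]]; }   # stub: only BaseBdev2 "exists"

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

The real helper additionally issues `bdev_wait_for_examine` first (visible in the trace) so that claims from examine callbacks settle before the lookup.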
00:12:33.276 BaseBdev2 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.276 [ 00:12:33.276 { 00:12:33.276 "name": "BaseBdev2", 00:12:33.276 "aliases": [ 00:12:33.276 "66ab87ba-d428-4130-b190-40aff55ac934" 00:12:33.276 ], 00:12:33.276 "product_name": "Malloc disk", 00:12:33.276 "block_size": 512, 00:12:33.276 "num_blocks": 65536, 00:12:33.276 "uuid": "66ab87ba-d428-4130-b190-40aff55ac934", 00:12:33.276 
"assigned_rate_limits": { 00:12:33.276 "rw_ios_per_sec": 0, 00:12:33.276 "rw_mbytes_per_sec": 0, 00:12:33.276 "r_mbytes_per_sec": 0, 00:12:33.276 "w_mbytes_per_sec": 0 00:12:33.276 }, 00:12:33.276 "claimed": true, 00:12:33.276 "claim_type": "exclusive_write", 00:12:33.276 "zoned": false, 00:12:33.276 "supported_io_types": { 00:12:33.276 "read": true, 00:12:33.276 "write": true, 00:12:33.276 "unmap": true, 00:12:33.276 "flush": true, 00:12:33.276 "reset": true, 00:12:33.276 "nvme_admin": false, 00:12:33.276 "nvme_io": false, 00:12:33.276 "nvme_io_md": false, 00:12:33.276 "write_zeroes": true, 00:12:33.276 "zcopy": true, 00:12:33.276 "get_zone_info": false, 00:12:33.276 "zone_management": false, 00:12:33.276 "zone_append": false, 00:12:33.276 "compare": false, 00:12:33.276 "compare_and_write": false, 00:12:33.276 "abort": true, 00:12:33.276 "seek_hole": false, 00:12:33.276 "seek_data": false, 00:12:33.276 "copy": true, 00:12:33.276 "nvme_iov_md": false 00:12:33.276 }, 00:12:33.276 "memory_domains": [ 00:12:33.276 { 00:12:33.276 "dma_device_id": "system", 00:12:33.276 "dma_device_type": 1 00:12:33.276 }, 00:12:33.276 { 00:12:33.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.276 "dma_device_type": 2 00:12:33.276 } 00:12:33.276 ], 00:12:33.276 "driver_specific": {} 00:12:33.276 } 00:12:33.276 ] 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.276 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.277 "name": "Existed_Raid", 00:12:33.277 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:33.277 "strip_size_kb": 64, 00:12:33.277 "state": "configuring", 00:12:33.277 "raid_level": "raid0", 00:12:33.277 "superblock": true, 00:12:33.277 "num_base_bdevs": 4, 00:12:33.277 "num_base_bdevs_discovered": 2, 00:12:33.277 "num_base_bdevs_operational": 4, 
00:12:33.277 "base_bdevs_list": [ 00:12:33.277 { 00:12:33.277 "name": "BaseBdev1", 00:12:33.277 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:33.277 "is_configured": true, 00:12:33.277 "data_offset": 2048, 00:12:33.277 "data_size": 63488 00:12:33.277 }, 00:12:33.277 { 00:12:33.277 "name": "BaseBdev2", 00:12:33.277 "uuid": "66ab87ba-d428-4130-b190-40aff55ac934", 00:12:33.277 "is_configured": true, 00:12:33.277 "data_offset": 2048, 00:12:33.277 "data_size": 63488 00:12:33.277 }, 00:12:33.277 { 00:12:33.277 "name": "BaseBdev3", 00:12:33.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.277 "is_configured": false, 00:12:33.277 "data_offset": 0, 00:12:33.277 "data_size": 0 00:12:33.277 }, 00:12:33.277 { 00:12:33.277 "name": "BaseBdev4", 00:12:33.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.277 "is_configured": false, 00:12:33.277 "data_offset": 0, 00:12:33.277 "data_size": 0 00:12:33.277 } 00:12:33.277 ] 00:12:33.277 }' 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.277 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.537 [2024-12-05 20:05:34.957677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.537 BaseBdev3 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.537 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.797 [ 00:12:33.797 { 00:12:33.797 "name": "BaseBdev3", 00:12:33.797 "aliases": [ 00:12:33.797 "63b66394-c101-483b-99ed-4b8b101fa19e" 00:12:33.797 ], 00:12:33.797 "product_name": "Malloc disk", 00:12:33.797 "block_size": 512, 00:12:33.797 "num_blocks": 65536, 00:12:33.797 "uuid": "63b66394-c101-483b-99ed-4b8b101fa19e", 00:12:33.797 "assigned_rate_limits": { 00:12:33.797 "rw_ios_per_sec": 0, 00:12:33.797 "rw_mbytes_per_sec": 0, 00:12:33.797 "r_mbytes_per_sec": 0, 00:12:33.797 "w_mbytes_per_sec": 0 00:12:33.797 }, 00:12:33.797 "claimed": true, 00:12:33.797 "claim_type": "exclusive_write", 00:12:33.797 "zoned": false, 00:12:33.797 "supported_io_types": { 00:12:33.797 "read": true, 00:12:33.797 
"write": true, 00:12:33.797 "unmap": true, 00:12:33.797 "flush": true, 00:12:33.797 "reset": true, 00:12:33.797 "nvme_admin": false, 00:12:33.797 "nvme_io": false, 00:12:33.797 "nvme_io_md": false, 00:12:33.797 "write_zeroes": true, 00:12:33.797 "zcopy": true, 00:12:33.797 "get_zone_info": false, 00:12:33.797 "zone_management": false, 00:12:33.797 "zone_append": false, 00:12:33.797 "compare": false, 00:12:33.797 "compare_and_write": false, 00:12:33.797 "abort": true, 00:12:33.797 "seek_hole": false, 00:12:33.798 "seek_data": false, 00:12:33.798 "copy": true, 00:12:33.798 "nvme_iov_md": false 00:12:33.798 }, 00:12:33.798 "memory_domains": [ 00:12:33.798 { 00:12:33.798 "dma_device_id": "system", 00:12:33.798 "dma_device_type": 1 00:12:33.798 }, 00:12:33.798 { 00:12:33.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.798 "dma_device_type": 2 00:12:33.798 } 00:12:33.798 ], 00:12:33.798 "driver_specific": {} 00:12:33.798 } 00:12:33.798 ] 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.798 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.798 "name": "Existed_Raid", 00:12:33.798 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:33.798 "strip_size_kb": 64, 00:12:33.798 "state": "configuring", 00:12:33.798 "raid_level": "raid0", 00:12:33.798 "superblock": true, 00:12:33.798 "num_base_bdevs": 4, 00:12:33.798 "num_base_bdevs_discovered": 3, 00:12:33.798 "num_base_bdevs_operational": 4, 00:12:33.798 "base_bdevs_list": [ 00:12:33.798 { 00:12:33.798 "name": "BaseBdev1", 00:12:33.798 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:33.798 "is_configured": true, 00:12:33.798 "data_offset": 2048, 00:12:33.798 "data_size": 63488 00:12:33.798 }, 00:12:33.798 { 00:12:33.798 "name": "BaseBdev2", 00:12:33.798 "uuid": 
"66ab87ba-d428-4130-b190-40aff55ac934", 00:12:33.798 "is_configured": true, 00:12:33.798 "data_offset": 2048, 00:12:33.798 "data_size": 63488 00:12:33.798 }, 00:12:33.798 { 00:12:33.798 "name": "BaseBdev3", 00:12:33.798 "uuid": "63b66394-c101-483b-99ed-4b8b101fa19e", 00:12:33.798 "is_configured": true, 00:12:33.798 "data_offset": 2048, 00:12:33.798 "data_size": 63488 00:12:33.798 }, 00:12:33.798 { 00:12:33.798 "name": "BaseBdev4", 00:12:33.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.798 "is_configured": false, 00:12:33.798 "data_offset": 0, 00:12:33.798 "data_size": 0 00:12:33.798 } 00:12:33.798 ] 00:12:33.798 }' 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.798 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.058 [2024-12-05 20:05:35.488592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.058 [2024-12-05 20:05:35.488861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.058 [2024-12-05 20:05:35.488877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:34.058 [2024-12-05 20:05:35.489245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:34.058 BaseBdev4 00:12:34.058 [2024-12-05 20:05:35.489425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.058 [2024-12-05 20:05:35.489445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:34.058 [2024-12-05 20:05:35.489608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.058 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:34.317 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 [ 00:12:34.317 { 00:12:34.317 "name": "BaseBdev4", 00:12:34.317 "aliases": [ 00:12:34.317 "cf4479f8-7c70-4fc3-83a8-9d56dcf897b7" 00:12:34.317 ], 00:12:34.317 "product_name": "Malloc disk", 00:12:34.317 "block_size": 512, 00:12:34.317 
"num_blocks": 65536, 00:12:34.317 "uuid": "cf4479f8-7c70-4fc3-83a8-9d56dcf897b7", 00:12:34.317 "assigned_rate_limits": { 00:12:34.317 "rw_ios_per_sec": 0, 00:12:34.317 "rw_mbytes_per_sec": 0, 00:12:34.317 "r_mbytes_per_sec": 0, 00:12:34.317 "w_mbytes_per_sec": 0 00:12:34.317 }, 00:12:34.318 "claimed": true, 00:12:34.318 "claim_type": "exclusive_write", 00:12:34.318 "zoned": false, 00:12:34.318 "supported_io_types": { 00:12:34.318 "read": true, 00:12:34.318 "write": true, 00:12:34.318 "unmap": true, 00:12:34.318 "flush": true, 00:12:34.318 "reset": true, 00:12:34.318 "nvme_admin": false, 00:12:34.318 "nvme_io": false, 00:12:34.318 "nvme_io_md": false, 00:12:34.318 "write_zeroes": true, 00:12:34.318 "zcopy": true, 00:12:34.318 "get_zone_info": false, 00:12:34.318 "zone_management": false, 00:12:34.318 "zone_append": false, 00:12:34.318 "compare": false, 00:12:34.318 "compare_and_write": false, 00:12:34.318 "abort": true, 00:12:34.318 "seek_hole": false, 00:12:34.318 "seek_data": false, 00:12:34.318 "copy": true, 00:12:34.318 "nvme_iov_md": false 00:12:34.318 }, 00:12:34.318 "memory_domains": [ 00:12:34.318 { 00:12:34.318 "dma_device_id": "system", 00:12:34.318 "dma_device_type": 1 00:12:34.318 }, 00:12:34.318 { 00:12:34.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.318 "dma_device_type": 2 00:12:34.318 } 00:12:34.318 ], 00:12:34.318 "driver_specific": {} 00:12:34.318 } 00:12:34.318 ] 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.318 "name": "Existed_Raid", 00:12:34.318 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:34.318 "strip_size_kb": 64, 00:12:34.318 "state": "online", 00:12:34.318 "raid_level": "raid0", 00:12:34.318 "superblock": true, 00:12:34.318 "num_base_bdevs": 4, 
00:12:34.318 "num_base_bdevs_discovered": 4, 00:12:34.318 "num_base_bdevs_operational": 4, 00:12:34.318 "base_bdevs_list": [ 00:12:34.318 { 00:12:34.318 "name": "BaseBdev1", 00:12:34.318 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:34.318 "is_configured": true, 00:12:34.318 "data_offset": 2048, 00:12:34.318 "data_size": 63488 00:12:34.318 }, 00:12:34.318 { 00:12:34.318 "name": "BaseBdev2", 00:12:34.318 "uuid": "66ab87ba-d428-4130-b190-40aff55ac934", 00:12:34.318 "is_configured": true, 00:12:34.318 "data_offset": 2048, 00:12:34.318 "data_size": 63488 00:12:34.318 }, 00:12:34.318 { 00:12:34.318 "name": "BaseBdev3", 00:12:34.318 "uuid": "63b66394-c101-483b-99ed-4b8b101fa19e", 00:12:34.318 "is_configured": true, 00:12:34.318 "data_offset": 2048, 00:12:34.318 "data_size": 63488 00:12:34.318 }, 00:12:34.318 { 00:12:34.318 "name": "BaseBdev4", 00:12:34.318 "uuid": "cf4479f8-7c70-4fc3-83a8-9d56dcf897b7", 00:12:34.318 "is_configured": true, 00:12:34.318 "data_offset": 2048, 00:12:34.318 "data_size": 63488 00:12:34.318 } 00:12:34.318 ] 00:12:34.318 }' 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.318 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.578 
20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 [2024-12-05 20:05:35.968249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.578 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.578 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.578 "name": "Existed_Raid", 00:12:34.578 "aliases": [ 00:12:34.578 "484b7e48-36c7-479c-a4c5-c64ddff7d44b" 00:12:34.578 ], 00:12:34.578 "product_name": "Raid Volume", 00:12:34.578 "block_size": 512, 00:12:34.578 "num_blocks": 253952, 00:12:34.578 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:34.578 "assigned_rate_limits": { 00:12:34.578 "rw_ios_per_sec": 0, 00:12:34.578 "rw_mbytes_per_sec": 0, 00:12:34.578 "r_mbytes_per_sec": 0, 00:12:34.578 "w_mbytes_per_sec": 0 00:12:34.578 }, 00:12:34.578 "claimed": false, 00:12:34.578 "zoned": false, 00:12:34.578 "supported_io_types": { 00:12:34.578 "read": true, 00:12:34.578 "write": true, 00:12:34.578 "unmap": true, 00:12:34.578 "flush": true, 00:12:34.578 "reset": true, 00:12:34.578 "nvme_admin": false, 00:12:34.578 "nvme_io": false, 00:12:34.578 "nvme_io_md": false, 00:12:34.578 "write_zeroes": true, 00:12:34.578 "zcopy": false, 00:12:34.578 "get_zone_info": false, 00:12:34.578 "zone_management": false, 00:12:34.578 "zone_append": false, 00:12:34.578 "compare": false, 00:12:34.578 "compare_and_write": false, 00:12:34.578 "abort": false, 00:12:34.578 "seek_hole": false, 00:12:34.578 "seek_data": false, 00:12:34.578 "copy": false, 00:12:34.578 
"nvme_iov_md": false 00:12:34.578 }, 00:12:34.578 "memory_domains": [ 00:12:34.578 { 00:12:34.578 "dma_device_id": "system", 00:12:34.578 "dma_device_type": 1 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.578 "dma_device_type": 2 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "dma_device_id": "system", 00:12:34.578 "dma_device_type": 1 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.578 "dma_device_type": 2 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "dma_device_id": "system", 00:12:34.578 "dma_device_type": 1 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.578 "dma_device_type": 2 00:12:34.578 }, 00:12:34.579 { 00:12:34.579 "dma_device_id": "system", 00:12:34.579 "dma_device_type": 1 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.579 "dma_device_type": 2 00:12:34.579 } 00:12:34.579 ], 00:12:34.579 "driver_specific": { 00:12:34.579 "raid": { 00:12:34.579 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:34.579 "strip_size_kb": 64, 00:12:34.579 "state": "online", 00:12:34.579 "raid_level": "raid0", 00:12:34.579 "superblock": true, 00:12:34.579 "num_base_bdevs": 4, 00:12:34.579 "num_base_bdevs_discovered": 4, 00:12:34.579 "num_base_bdevs_operational": 4, 00:12:34.579 "base_bdevs_list": [ 00:12:34.579 { 00:12:34.579 "name": "BaseBdev1", 00:12:34.579 "uuid": "84b0d28e-ebc8-44aa-97b1-79d5363def05", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 2048, 00:12:34.579 "data_size": 63488 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev2", 00:12:34.579 "uuid": "66ab87ba-d428-4130-b190-40aff55ac934", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 2048, 00:12:34.579 "data_size": 63488 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev3", 00:12:34.579 "uuid": "63b66394-c101-483b-99ed-4b8b101fa19e", 00:12:34.579 "is_configured": true, 
00:12:34.579 "data_offset": 2048, 00:12:34.579 "data_size": 63488 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev4", 00:12:34.579 "uuid": "cf4479f8-7c70-4fc3-83a8-9d56dcf897b7", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 2048, 00:12:34.579 "data_size": 63488 00:12:34.579 } 00:12:34.579 ] 00:12:34.579 } 00:12:34.579 } 00:12:34.579 }' 00:12:34.579 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.839 BaseBdev2 00:12:34.839 BaseBdev3 00:12:34.839 BaseBdev4' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.839 20:05:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.839 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.099 [2024-12-05 20:05:36.275436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.099 [2024-12-05 20:05:36.275524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.099 [2024-12-05 20:05:36.275611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:35.099 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.100 "name": "Existed_Raid", 00:12:35.100 "uuid": "484b7e48-36c7-479c-a4c5-c64ddff7d44b", 00:12:35.100 "strip_size_kb": 64, 00:12:35.100 "state": "offline", 00:12:35.100 "raid_level": "raid0", 00:12:35.100 "superblock": true, 00:12:35.100 "num_base_bdevs": 4, 00:12:35.100 "num_base_bdevs_discovered": 3, 00:12:35.100 "num_base_bdevs_operational": 3, 00:12:35.100 "base_bdevs_list": [ 00:12:35.100 { 00:12:35.100 "name": null, 00:12:35.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.100 "is_configured": false, 00:12:35.100 "data_offset": 0, 00:12:35.100 "data_size": 63488 00:12:35.100 }, 00:12:35.100 { 00:12:35.100 "name": "BaseBdev2", 00:12:35.100 "uuid": "66ab87ba-d428-4130-b190-40aff55ac934", 00:12:35.100 "is_configured": true, 00:12:35.100 "data_offset": 2048, 00:12:35.100 "data_size": 63488 00:12:35.100 }, 00:12:35.100 { 00:12:35.100 "name": "BaseBdev3", 00:12:35.100 "uuid": "63b66394-c101-483b-99ed-4b8b101fa19e", 00:12:35.100 "is_configured": true, 00:12:35.100 "data_offset": 2048, 00:12:35.100 "data_size": 63488 00:12:35.100 }, 00:12:35.100 { 00:12:35.100 "name": "BaseBdev4", 00:12:35.100 "uuid": "cf4479f8-7c70-4fc3-83a8-9d56dcf897b7", 00:12:35.100 "is_configured": true, 00:12:35.100 "data_offset": 2048, 00:12:35.100 "data_size": 63488 00:12:35.100 } 00:12:35.100 ] 00:12:35.100 }' 00:12:35.100 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.100 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.669 
20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.669 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.670 [2024-12-05 20:05:36.852516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.670 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.670 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.670 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.670 [2024-12-05 20:05:37.005429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.670 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.670 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.670 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:35.930 20:05:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 [2024-12-05 20:05:37.158908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:35.930 [2024-12-05 20:05:37.159015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 BaseBdev2 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.930 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.190 [ 00:12:36.190 { 00:12:36.190 "name": "BaseBdev2", 00:12:36.190 "aliases": [ 00:12:36.190 
"3390c7bb-2ca5-446e-9746-dc5d621b6884" 00:12:36.190 ], 00:12:36.190 "product_name": "Malloc disk", 00:12:36.190 "block_size": 512, 00:12:36.190 "num_blocks": 65536, 00:12:36.190 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:36.190 "assigned_rate_limits": { 00:12:36.190 "rw_ios_per_sec": 0, 00:12:36.190 "rw_mbytes_per_sec": 0, 00:12:36.190 "r_mbytes_per_sec": 0, 00:12:36.190 "w_mbytes_per_sec": 0 00:12:36.190 }, 00:12:36.190 "claimed": false, 00:12:36.190 "zoned": false, 00:12:36.190 "supported_io_types": { 00:12:36.190 "read": true, 00:12:36.190 "write": true, 00:12:36.190 "unmap": true, 00:12:36.190 "flush": true, 00:12:36.191 "reset": true, 00:12:36.191 "nvme_admin": false, 00:12:36.191 "nvme_io": false, 00:12:36.191 "nvme_io_md": false, 00:12:36.191 "write_zeroes": true, 00:12:36.191 "zcopy": true, 00:12:36.191 "get_zone_info": false, 00:12:36.191 "zone_management": false, 00:12:36.191 "zone_append": false, 00:12:36.191 "compare": false, 00:12:36.191 "compare_and_write": false, 00:12:36.191 "abort": true, 00:12:36.191 "seek_hole": false, 00:12:36.191 "seek_data": false, 00:12:36.191 "copy": true, 00:12:36.191 "nvme_iov_md": false 00:12:36.191 }, 00:12:36.191 "memory_domains": [ 00:12:36.191 { 00:12:36.191 "dma_device_id": "system", 00:12:36.191 "dma_device_type": 1 00:12:36.191 }, 00:12:36.191 { 00:12:36.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.191 "dma_device_type": 2 00:12:36.191 } 00:12:36.191 ], 00:12:36.191 "driver_specific": {} 00:12:36.191 } 00:12:36.191 ] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.191 20:05:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.191 BaseBdev3 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.191 [ 00:12:36.191 { 
00:12:36.191 "name": "BaseBdev3", 00:12:36.191 "aliases": [ 00:12:36.191 "10a45057-a340-487c-83f7-55087385b7a8" 00:12:36.191 ], 00:12:36.191 "product_name": "Malloc disk", 00:12:36.191 "block_size": 512, 00:12:36.191 "num_blocks": 65536, 00:12:36.191 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:36.191 "assigned_rate_limits": { 00:12:36.191 "rw_ios_per_sec": 0, 00:12:36.191 "rw_mbytes_per_sec": 0, 00:12:36.191 "r_mbytes_per_sec": 0, 00:12:36.191 "w_mbytes_per_sec": 0 00:12:36.191 }, 00:12:36.191 "claimed": false, 00:12:36.191 "zoned": false, 00:12:36.191 "supported_io_types": { 00:12:36.191 "read": true, 00:12:36.191 "write": true, 00:12:36.191 "unmap": true, 00:12:36.191 "flush": true, 00:12:36.191 "reset": true, 00:12:36.191 "nvme_admin": false, 00:12:36.191 "nvme_io": false, 00:12:36.191 "nvme_io_md": false, 00:12:36.191 "write_zeroes": true, 00:12:36.191 "zcopy": true, 00:12:36.191 "get_zone_info": false, 00:12:36.191 "zone_management": false, 00:12:36.191 "zone_append": false, 00:12:36.191 "compare": false, 00:12:36.191 "compare_and_write": false, 00:12:36.191 "abort": true, 00:12:36.191 "seek_hole": false, 00:12:36.191 "seek_data": false, 00:12:36.191 "copy": true, 00:12:36.191 "nvme_iov_md": false 00:12:36.191 }, 00:12:36.191 "memory_domains": [ 00:12:36.191 { 00:12:36.191 "dma_device_id": "system", 00:12:36.191 "dma_device_type": 1 00:12:36.191 }, 00:12:36.191 { 00:12:36.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.191 "dma_device_type": 2 00:12:36.191 } 00:12:36.191 ], 00:12:36.191 "driver_specific": {} 00:12:36.191 } 00:12:36.191 ] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.191 BaseBdev4 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:36.191 [ 00:12:36.191 { 00:12:36.191 "name": "BaseBdev4", 00:12:36.191 "aliases": [ 00:12:36.191 "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c" 00:12:36.191 ], 00:12:36.191 "product_name": "Malloc disk", 00:12:36.191 "block_size": 512, 00:12:36.191 "num_blocks": 65536, 00:12:36.191 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:36.191 "assigned_rate_limits": { 00:12:36.191 "rw_ios_per_sec": 0, 00:12:36.191 "rw_mbytes_per_sec": 0, 00:12:36.191 "r_mbytes_per_sec": 0, 00:12:36.191 "w_mbytes_per_sec": 0 00:12:36.191 }, 00:12:36.191 "claimed": false, 00:12:36.191 "zoned": false, 00:12:36.191 "supported_io_types": { 00:12:36.191 "read": true, 00:12:36.191 "write": true, 00:12:36.191 "unmap": true, 00:12:36.191 "flush": true, 00:12:36.191 "reset": true, 00:12:36.191 "nvme_admin": false, 00:12:36.191 "nvme_io": false, 00:12:36.191 "nvme_io_md": false, 00:12:36.191 "write_zeroes": true, 00:12:36.191 "zcopy": true, 00:12:36.191 "get_zone_info": false, 00:12:36.191 "zone_management": false, 00:12:36.191 "zone_append": false, 00:12:36.191 "compare": false, 00:12:36.191 "compare_and_write": false, 00:12:36.191 "abort": true, 00:12:36.191 "seek_hole": false, 00:12:36.191 "seek_data": false, 00:12:36.191 "copy": true, 00:12:36.191 "nvme_iov_md": false 00:12:36.191 }, 00:12:36.191 "memory_domains": [ 00:12:36.191 { 00:12:36.191 "dma_device_id": "system", 00:12:36.191 "dma_device_type": 1 00:12:36.191 }, 00:12:36.191 { 00:12:36.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.191 "dma_device_type": 2 00:12:36.191 } 00:12:36.191 ], 00:12:36.191 "driver_specific": {} 00:12:36.191 } 00:12:36.191 ] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.191 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.191 20:05:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.192 [2024-12-05 20:05:37.558558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.192 [2024-12-05 20:05:37.558659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.192 [2024-12-05 20:05:37.558704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.192 [2024-12-05 20:05:37.560563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.192 [2024-12-05 20:05:37.560657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.192 "name": "Existed_Raid", 00:12:36.192 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:36.192 "strip_size_kb": 64, 00:12:36.192 "state": "configuring", 00:12:36.192 "raid_level": "raid0", 00:12:36.192 "superblock": true, 00:12:36.192 "num_base_bdevs": 4, 00:12:36.192 "num_base_bdevs_discovered": 3, 00:12:36.192 "num_base_bdevs_operational": 4, 00:12:36.192 "base_bdevs_list": [ 00:12:36.192 { 00:12:36.192 "name": "BaseBdev1", 00:12:36.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.192 "is_configured": false, 00:12:36.192 "data_offset": 0, 00:12:36.192 "data_size": 0 00:12:36.192 }, 00:12:36.192 { 00:12:36.192 "name": "BaseBdev2", 00:12:36.192 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:36.192 "is_configured": true, 00:12:36.192 "data_offset": 2048, 00:12:36.192 "data_size": 63488 
00:12:36.192 }, 00:12:36.192 { 00:12:36.192 "name": "BaseBdev3", 00:12:36.192 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:36.192 "is_configured": true, 00:12:36.192 "data_offset": 2048, 00:12:36.192 "data_size": 63488 00:12:36.192 }, 00:12:36.192 { 00:12:36.192 "name": "BaseBdev4", 00:12:36.192 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:36.192 "is_configured": true, 00:12:36.192 "data_offset": 2048, 00:12:36.192 "data_size": 63488 00:12:36.192 } 00:12:36.192 ] 00:12:36.192 }' 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.192 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.769 [2024-12-05 20:05:38.005912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.769 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.769 "name": "Existed_Raid", 00:12:36.769 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:36.769 "strip_size_kb": 64, 00:12:36.769 "state": "configuring", 00:12:36.769 "raid_level": "raid0", 00:12:36.769 "superblock": true, 00:12:36.769 "num_base_bdevs": 4, 00:12:36.769 "num_base_bdevs_discovered": 2, 00:12:36.769 "num_base_bdevs_operational": 4, 00:12:36.769 "base_bdevs_list": [ 00:12:36.769 { 00:12:36.769 "name": "BaseBdev1", 00:12:36.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.769 "is_configured": false, 00:12:36.769 "data_offset": 0, 00:12:36.769 "data_size": 0 00:12:36.769 }, 00:12:36.769 { 00:12:36.769 "name": null, 00:12:36.769 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:36.769 "is_configured": false, 00:12:36.769 "data_offset": 0, 00:12:36.769 "data_size": 63488 
00:12:36.769 }, 00:12:36.769 { 00:12:36.769 "name": "BaseBdev3", 00:12:36.769 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:36.769 "is_configured": true, 00:12:36.770 "data_offset": 2048, 00:12:36.770 "data_size": 63488 00:12:36.770 }, 00:12:36.770 { 00:12:36.770 "name": "BaseBdev4", 00:12:36.770 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:36.770 "is_configured": true, 00:12:36.770 "data_offset": 2048, 00:12:36.770 "data_size": 63488 00:12:36.770 } 00:12:36.770 ] 00:12:36.770 }' 00:12:36.770 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.770 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.044 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.044 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.044 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.044 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 [2024-12-05 20:05:38.545964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.315 BaseBdev1 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 [ 00:12:37.315 { 00:12:37.315 "name": "BaseBdev1", 00:12:37.315 "aliases": [ 00:12:37.315 "138d0cc3-7423-46da-a0bc-33a346b2938f" 00:12:37.315 ], 00:12:37.315 "product_name": "Malloc disk", 00:12:37.315 "block_size": 512, 00:12:37.315 "num_blocks": 65536, 00:12:37.315 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:37.315 "assigned_rate_limits": { 00:12:37.315 "rw_ios_per_sec": 0, 00:12:37.315 "rw_mbytes_per_sec": 0, 
00:12:37.315 "r_mbytes_per_sec": 0, 00:12:37.315 "w_mbytes_per_sec": 0 00:12:37.315 }, 00:12:37.315 "claimed": true, 00:12:37.315 "claim_type": "exclusive_write", 00:12:37.315 "zoned": false, 00:12:37.315 "supported_io_types": { 00:12:37.315 "read": true, 00:12:37.315 "write": true, 00:12:37.315 "unmap": true, 00:12:37.315 "flush": true, 00:12:37.315 "reset": true, 00:12:37.315 "nvme_admin": false, 00:12:37.315 "nvme_io": false, 00:12:37.315 "nvme_io_md": false, 00:12:37.315 "write_zeroes": true, 00:12:37.315 "zcopy": true, 00:12:37.315 "get_zone_info": false, 00:12:37.315 "zone_management": false, 00:12:37.315 "zone_append": false, 00:12:37.315 "compare": false, 00:12:37.315 "compare_and_write": false, 00:12:37.315 "abort": true, 00:12:37.315 "seek_hole": false, 00:12:37.315 "seek_data": false, 00:12:37.315 "copy": true, 00:12:37.315 "nvme_iov_md": false 00:12:37.315 }, 00:12:37.315 "memory_domains": [ 00:12:37.315 { 00:12:37.315 "dma_device_id": "system", 00:12:37.315 "dma_device_type": 1 00:12:37.315 }, 00:12:37.315 { 00:12:37.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.315 "dma_device_type": 2 00:12:37.315 } 00:12:37.315 ], 00:12:37.315 "driver_specific": {} 00:12:37.315 } 00:12:37.315 ] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.315 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.315 20:05:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.316 "name": "Existed_Raid", 00:12:37.316 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:37.316 "strip_size_kb": 64, 00:12:37.316 "state": "configuring", 00:12:37.316 "raid_level": "raid0", 00:12:37.316 "superblock": true, 00:12:37.316 "num_base_bdevs": 4, 00:12:37.316 "num_base_bdevs_discovered": 3, 00:12:37.316 "num_base_bdevs_operational": 4, 00:12:37.316 "base_bdevs_list": [ 00:12:37.316 { 00:12:37.316 "name": "BaseBdev1", 00:12:37.316 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:37.316 "is_configured": true, 00:12:37.316 "data_offset": 2048, 00:12:37.316 "data_size": 63488 00:12:37.316 }, 00:12:37.316 { 
00:12:37.316 "name": null, 00:12:37.316 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:37.316 "is_configured": false, 00:12:37.316 "data_offset": 0, 00:12:37.316 "data_size": 63488 00:12:37.316 }, 00:12:37.316 { 00:12:37.316 "name": "BaseBdev3", 00:12:37.316 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:37.316 "is_configured": true, 00:12:37.316 "data_offset": 2048, 00:12:37.316 "data_size": 63488 00:12:37.316 }, 00:12:37.316 { 00:12:37.316 "name": "BaseBdev4", 00:12:37.316 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:37.316 "is_configured": true, 00:12:37.316 "data_offset": 2048, 00:12:37.316 "data_size": 63488 00:12:37.316 } 00:12:37.316 ] 00:12:37.316 }' 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.316 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.884 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.884 [2024-12-05 20:05:39.077095] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.885 20:05:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.885 "name": "Existed_Raid", 00:12:37.885 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:37.885 "strip_size_kb": 64, 00:12:37.885 "state": "configuring", 00:12:37.885 "raid_level": "raid0", 00:12:37.885 "superblock": true, 00:12:37.885 "num_base_bdevs": 4, 00:12:37.885 "num_base_bdevs_discovered": 2, 00:12:37.885 "num_base_bdevs_operational": 4, 00:12:37.885 "base_bdevs_list": [ 00:12:37.885 { 00:12:37.885 "name": "BaseBdev1", 00:12:37.885 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:37.885 "is_configured": true, 00:12:37.885 "data_offset": 2048, 00:12:37.885 "data_size": 63488 00:12:37.885 }, 00:12:37.885 { 00:12:37.885 "name": null, 00:12:37.885 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:37.885 "is_configured": false, 00:12:37.885 "data_offset": 0, 00:12:37.885 "data_size": 63488 00:12:37.885 }, 00:12:37.885 { 00:12:37.885 "name": null, 00:12:37.885 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:37.885 "is_configured": false, 00:12:37.885 "data_offset": 0, 00:12:37.885 "data_size": 63488 00:12:37.885 }, 00:12:37.885 { 00:12:37.885 "name": "BaseBdev4", 00:12:37.885 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:37.885 "is_configured": true, 00:12:37.885 "data_offset": 2048, 00:12:37.885 "data_size": 63488 00:12:37.885 } 00:12:37.885 ] 00:12:37.885 }' 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.885 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.143 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.143 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.143 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.143 
20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.401 [2024-12-05 20:05:39.624316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.401 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.402 "name": "Existed_Raid", 00:12:38.402 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:38.402 "strip_size_kb": 64, 00:12:38.402 "state": "configuring", 00:12:38.402 "raid_level": "raid0", 00:12:38.402 "superblock": true, 00:12:38.402 "num_base_bdevs": 4, 00:12:38.402 "num_base_bdevs_discovered": 3, 00:12:38.402 "num_base_bdevs_operational": 4, 00:12:38.402 "base_bdevs_list": [ 00:12:38.402 { 00:12:38.402 "name": "BaseBdev1", 00:12:38.402 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:38.402 "is_configured": true, 00:12:38.402 "data_offset": 2048, 00:12:38.402 "data_size": 63488 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "name": null, 00:12:38.402 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:38.402 "is_configured": false, 00:12:38.402 "data_offset": 0, 00:12:38.402 "data_size": 63488 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "name": "BaseBdev3", 00:12:38.402 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:38.402 "is_configured": true, 00:12:38.402 "data_offset": 2048, 00:12:38.402 "data_size": 63488 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "name": "BaseBdev4", 00:12:38.402 "uuid": 
"3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:38.402 "is_configured": true, 00:12:38.402 "data_offset": 2048, 00:12:38.402 "data_size": 63488 00:12:38.402 } 00:12:38.402 ] 00:12:38.402 }' 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.402 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.662 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.662 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.662 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.662 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 [2024-12-05 20:05:40.119529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.922 "name": "Existed_Raid", 00:12:38.922 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:38.922 "strip_size_kb": 64, 00:12:38.922 "state": "configuring", 00:12:38.922 "raid_level": "raid0", 00:12:38.922 "superblock": true, 00:12:38.922 "num_base_bdevs": 4, 00:12:38.922 "num_base_bdevs_discovered": 2, 00:12:38.922 "num_base_bdevs_operational": 4, 00:12:38.922 "base_bdevs_list": [ 00:12:38.922 { 00:12:38.922 "name": null, 00:12:38.922 
"uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:38.922 "is_configured": false, 00:12:38.922 "data_offset": 0, 00:12:38.922 "data_size": 63488 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "name": null, 00:12:38.922 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:38.922 "is_configured": false, 00:12:38.922 "data_offset": 0, 00:12:38.922 "data_size": 63488 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "name": "BaseBdev3", 00:12:38.922 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:38.922 "is_configured": true, 00:12:38.922 "data_offset": 2048, 00:12:38.922 "data_size": 63488 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "name": "BaseBdev4", 00:12:38.922 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:38.922 "is_configured": true, 00:12:38.922 "data_offset": 2048, 00:12:38.922 "data_size": 63488 00:12:38.922 } 00:12:38.922 ] 00:12:38.922 }' 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.922 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 [2024-12-05 20:05:40.694421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.491 20:05:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.491 "name": "Existed_Raid", 00:12:39.491 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:39.491 "strip_size_kb": 64, 00:12:39.491 "state": "configuring", 00:12:39.491 "raid_level": "raid0", 00:12:39.491 "superblock": true, 00:12:39.491 "num_base_bdevs": 4, 00:12:39.491 "num_base_bdevs_discovered": 3, 00:12:39.491 "num_base_bdevs_operational": 4, 00:12:39.491 "base_bdevs_list": [ 00:12:39.491 { 00:12:39.491 "name": null, 00:12:39.491 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:39.491 "is_configured": false, 00:12:39.491 "data_offset": 0, 00:12:39.491 "data_size": 63488 00:12:39.491 }, 00:12:39.491 { 00:12:39.491 "name": "BaseBdev2", 00:12:39.491 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:39.491 "is_configured": true, 00:12:39.491 "data_offset": 2048, 00:12:39.491 "data_size": 63488 00:12:39.491 }, 00:12:39.491 { 00:12:39.491 "name": "BaseBdev3", 00:12:39.491 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:39.491 "is_configured": true, 00:12:39.491 "data_offset": 2048, 00:12:39.491 "data_size": 63488 00:12:39.491 }, 00:12:39.491 { 00:12:39.491 "name": "BaseBdev4", 00:12:39.491 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:39.491 "is_configured": true, 00:12:39.491 "data_offset": 2048, 00:12:39.491 "data_size": 63488 00:12:39.491 } 00:12:39.491 ] 00:12:39.491 }' 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.491 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.751 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.751 20:05:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.751 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.751 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:39.751 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.011 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:40.011 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.011 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.011 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.011 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 138d0cc3-7423-46da-a0bc-33a346b2938f 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 [2024-12-05 20:05:41.292091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.012 [2024-12-05 20:05:41.292355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.012 [2024-12-05 20:05:41.292370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:40.012 [2024-12-05 20:05:41.292645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:40.012 [2024-12-05 20:05:41.292794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.012 [2024-12-05 20:05:41.292805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:40.012 [2024-12-05 20:05:41.292967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.012 NewBaseBdev 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 20:05:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 [ 00:12:40.012 { 00:12:40.012 "name": "NewBaseBdev", 00:12:40.012 "aliases": [ 00:12:40.012 "138d0cc3-7423-46da-a0bc-33a346b2938f" 00:12:40.012 ], 00:12:40.012 "product_name": "Malloc disk", 00:12:40.012 "block_size": 512, 00:12:40.012 "num_blocks": 65536, 00:12:40.012 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:40.012 "assigned_rate_limits": { 00:12:40.012 "rw_ios_per_sec": 0, 00:12:40.012 "rw_mbytes_per_sec": 0, 00:12:40.012 "r_mbytes_per_sec": 0, 00:12:40.012 "w_mbytes_per_sec": 0 00:12:40.012 }, 00:12:40.012 "claimed": true, 00:12:40.012 "claim_type": "exclusive_write", 00:12:40.012 "zoned": false, 00:12:40.012 "supported_io_types": { 00:12:40.012 "read": true, 00:12:40.012 "write": true, 00:12:40.012 "unmap": true, 00:12:40.012 "flush": true, 00:12:40.012 "reset": true, 00:12:40.012 "nvme_admin": false, 00:12:40.012 "nvme_io": false, 00:12:40.012 "nvme_io_md": false, 00:12:40.012 "write_zeroes": true, 00:12:40.012 "zcopy": true, 00:12:40.012 "get_zone_info": false, 00:12:40.012 "zone_management": false, 00:12:40.012 "zone_append": false, 00:12:40.012 "compare": false, 00:12:40.012 "compare_and_write": false, 00:12:40.012 "abort": true, 00:12:40.012 "seek_hole": false, 00:12:40.012 "seek_data": false, 00:12:40.012 "copy": true, 00:12:40.012 "nvme_iov_md": false 00:12:40.012 }, 00:12:40.012 "memory_domains": [ 00:12:40.012 { 00:12:40.012 "dma_device_id": "system", 00:12:40.012 "dma_device_type": 1 00:12:40.012 }, 00:12:40.012 { 00:12:40.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.012 "dma_device_type": 2 00:12:40.012 } 00:12:40.012 ], 00:12:40.012 "driver_specific": {} 00:12:40.012 } 00:12:40.012 ] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:40.012 20:05:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.012 "name": "Existed_Raid", 00:12:40.012 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:40.012 "strip_size_kb": 64, 00:12:40.012 
"state": "online", 00:12:40.012 "raid_level": "raid0", 00:12:40.012 "superblock": true, 00:12:40.012 "num_base_bdevs": 4, 00:12:40.012 "num_base_bdevs_discovered": 4, 00:12:40.012 "num_base_bdevs_operational": 4, 00:12:40.012 "base_bdevs_list": [ 00:12:40.012 { 00:12:40.012 "name": "NewBaseBdev", 00:12:40.012 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:40.012 "is_configured": true, 00:12:40.012 "data_offset": 2048, 00:12:40.012 "data_size": 63488 00:12:40.012 }, 00:12:40.012 { 00:12:40.012 "name": "BaseBdev2", 00:12:40.012 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:40.012 "is_configured": true, 00:12:40.012 "data_offset": 2048, 00:12:40.012 "data_size": 63488 00:12:40.012 }, 00:12:40.012 { 00:12:40.012 "name": "BaseBdev3", 00:12:40.012 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:40.012 "is_configured": true, 00:12:40.012 "data_offset": 2048, 00:12:40.012 "data_size": 63488 00:12:40.012 }, 00:12:40.012 { 00:12:40.012 "name": "BaseBdev4", 00:12:40.012 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:40.012 "is_configured": true, 00:12:40.012 "data_offset": 2048, 00:12:40.012 "data_size": 63488 00:12:40.012 } 00:12:40.012 ] 00:12:40.012 }' 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.012 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.581 
20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.581 [2024-12-05 20:05:41.803637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.581 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.581 "name": "Existed_Raid", 00:12:40.581 "aliases": [ 00:12:40.581 "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb" 00:12:40.581 ], 00:12:40.581 "product_name": "Raid Volume", 00:12:40.581 "block_size": 512, 00:12:40.581 "num_blocks": 253952, 00:12:40.581 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:40.581 "assigned_rate_limits": { 00:12:40.581 "rw_ios_per_sec": 0, 00:12:40.581 "rw_mbytes_per_sec": 0, 00:12:40.581 "r_mbytes_per_sec": 0, 00:12:40.581 "w_mbytes_per_sec": 0 00:12:40.581 }, 00:12:40.581 "claimed": false, 00:12:40.581 "zoned": false, 00:12:40.581 "supported_io_types": { 00:12:40.581 "read": true, 00:12:40.581 "write": true, 00:12:40.581 "unmap": true, 00:12:40.581 "flush": true, 00:12:40.581 "reset": true, 00:12:40.581 "nvme_admin": false, 00:12:40.581 "nvme_io": false, 00:12:40.581 "nvme_io_md": false, 00:12:40.581 "write_zeroes": true, 00:12:40.581 "zcopy": false, 00:12:40.581 "get_zone_info": false, 00:12:40.581 "zone_management": false, 00:12:40.581 "zone_append": false, 00:12:40.581 "compare": false, 00:12:40.581 "compare_and_write": false, 00:12:40.581 "abort": 
false, 00:12:40.581 "seek_hole": false, 00:12:40.581 "seek_data": false, 00:12:40.581 "copy": false, 00:12:40.581 "nvme_iov_md": false 00:12:40.581 }, 00:12:40.581 "memory_domains": [ 00:12:40.581 { 00:12:40.581 "dma_device_id": "system", 00:12:40.581 "dma_device_type": 1 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.581 "dma_device_type": 2 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "system", 00:12:40.581 "dma_device_type": 1 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.581 "dma_device_type": 2 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "system", 00:12:40.581 "dma_device_type": 1 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.581 "dma_device_type": 2 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "system", 00:12:40.581 "dma_device_type": 1 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.581 "dma_device_type": 2 00:12:40.581 } 00:12:40.581 ], 00:12:40.581 "driver_specific": { 00:12:40.581 "raid": { 00:12:40.581 "uuid": "8dbc6bfb-f44f-4666-8fca-7cf6f869c8bb", 00:12:40.581 "strip_size_kb": 64, 00:12:40.581 "state": "online", 00:12:40.581 "raid_level": "raid0", 00:12:40.581 "superblock": true, 00:12:40.581 "num_base_bdevs": 4, 00:12:40.581 "num_base_bdevs_discovered": 4, 00:12:40.581 "num_base_bdevs_operational": 4, 00:12:40.581 "base_bdevs_list": [ 00:12:40.581 { 00:12:40.581 "name": "NewBaseBdev", 00:12:40.581 "uuid": "138d0cc3-7423-46da-a0bc-33a346b2938f", 00:12:40.581 "is_configured": true, 00:12:40.582 "data_offset": 2048, 00:12:40.582 "data_size": 63488 00:12:40.582 }, 00:12:40.582 { 00:12:40.582 "name": "BaseBdev2", 00:12:40.582 "uuid": "3390c7bb-2ca5-446e-9746-dc5d621b6884", 00:12:40.582 "is_configured": true, 00:12:40.582 "data_offset": 2048, 00:12:40.582 "data_size": 63488 00:12:40.582 }, 00:12:40.582 { 00:12:40.582 
"name": "BaseBdev3", 00:12:40.582 "uuid": "10a45057-a340-487c-83f7-55087385b7a8", 00:12:40.582 "is_configured": true, 00:12:40.582 "data_offset": 2048, 00:12:40.582 "data_size": 63488 00:12:40.582 }, 00:12:40.582 { 00:12:40.582 "name": "BaseBdev4", 00:12:40.582 "uuid": "3cc1f9d9-ba8e-4a10-8409-c76c8d26a71c", 00:12:40.582 "is_configured": true, 00:12:40.582 "data_offset": 2048, 00:12:40.582 "data_size": 63488 00:12:40.582 } 00:12:40.582 ] 00:12:40.582 } 00:12:40.582 } 00:12:40.582 }' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.582 BaseBdev2 00:12:40.582 BaseBdev3 00:12:40.582 BaseBdev4' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.582 20:05:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 20:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.841 [2024-12-05 20:05:42.146716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.841 [2024-12-05 20:05:42.146796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.841 [2024-12-05 20:05:42.146930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.841 [2024-12-05 20:05:42.147042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.841 [2024-12-05 20:05:42.147095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70176 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70176 ']' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70176 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70176 00:12:40.841 killing process with pid 70176 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.841 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.842 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70176' 00:12:40.842 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70176 00:12:40.842 20:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70176 00:12:40.842 [2024-12-05 20:05:42.193667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.415 [2024-12-05 20:05:42.603462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.356 20:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.356 00:12:42.356 real 0m11.791s 00:12:42.356 user 0m18.742s 00:12:42.356 sys 0m2.004s 00:12:42.356 20:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.356 20:05:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.356 ************************************ 00:12:42.356 END TEST raid_state_function_test_sb 00:12:42.356 ************************************ 00:12:42.615 20:05:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:42.615 20:05:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:42.615 20:05:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.615 20:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.615 ************************************ 00:12:42.615 START TEST raid_superblock_test 00:12:42.615 ************************************ 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70849 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70849 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70849 ']' 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.615 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.615 [2024-12-05 20:05:43.944738] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:42.615 [2024-12-05 20:05:43.944872] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:12:42.886 [2024-12-05 20:05:44.108155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.886 [2024-12-05 20:05:44.224142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.196 [2024-12-05 20:05:44.431968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.196 [2024-12-05 20:05:44.432016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:43.456 
20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.456 malloc1 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.456 [2024-12-05 20:05:44.834762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:43.456 [2024-12-05 20:05:44.834866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.456 [2024-12-05 20:05:44.834922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.456 [2024-12-05 20:05:44.834954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.456 [2024-12-05 20:05:44.837163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.456 [2024-12-05 20:05:44.837255] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:43.456 pt1 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.456 malloc2 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.456 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.456 [2024-12-05 20:05:44.885124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.456 [2024-12-05 20:05:44.885237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.456 [2024-12-05 20:05:44.885268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.456 [2024-12-05 20:05:44.885278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.456 [2024-12-05 20:05:44.887559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.456 [2024-12-05 20:05:44.887597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.717 
pt2 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 malloc3 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 [2024-12-05 20:05:44.956173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:43.717 [2024-12-05 20:05:44.956302] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.717 [2024-12-05 20:05:44.956349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:43.717 [2024-12-05 20:05:44.956405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.717 [2024-12-05 20:05:44.958626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.717 [2024-12-05 20:05:44.958705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:43.717 pt3 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 malloc4 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 [2024-12-05 20:05:45.014552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:43.717 [2024-12-05 20:05:45.014653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.717 [2024-12-05 20:05:45.014694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:43.717 [2024-12-05 20:05:45.014723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.717 [2024-12-05 20:05:45.016874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.717 [2024-12-05 20:05:45.016961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:43.717 pt4 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 [2024-12-05 20:05:45.026564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.717 [2024-12-05 
20:05:45.028430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.717 [2024-12-05 20:05:45.028585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:43.717 [2024-12-05 20:05:45.028682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:43.717 [2024-12-05 20:05:45.028934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.717 [2024-12-05 20:05:45.028987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:43.717 [2024-12-05 20:05:45.029297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:43.717 [2024-12-05 20:05:45.029528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.717 [2024-12-05 20:05:45.029576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.717 [2024-12-05 20:05:45.029779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.717 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.717 "name": "raid_bdev1", 00:12:43.717 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:43.717 "strip_size_kb": 64, 00:12:43.717 "state": "online", 00:12:43.717 "raid_level": "raid0", 00:12:43.717 "superblock": true, 00:12:43.717 "num_base_bdevs": 4, 00:12:43.717 "num_base_bdevs_discovered": 4, 00:12:43.717 "num_base_bdevs_operational": 4, 00:12:43.717 "base_bdevs_list": [ 00:12:43.717 { 00:12:43.717 "name": "pt1", 00:12:43.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.717 "is_configured": true, 00:12:43.717 "data_offset": 2048, 00:12:43.717 "data_size": 63488 00:12:43.717 }, 00:12:43.717 { 00:12:43.717 "name": "pt2", 00:12:43.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.717 "is_configured": true, 00:12:43.717 "data_offset": 2048, 00:12:43.717 "data_size": 63488 00:12:43.717 }, 00:12:43.717 { 00:12:43.717 "name": "pt3", 00:12:43.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.717 "is_configured": true, 00:12:43.717 "data_offset": 2048, 00:12:43.717 
"data_size": 63488 00:12:43.717 }, 00:12:43.717 { 00:12:43.717 "name": "pt4", 00:12:43.717 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.718 "is_configured": true, 00:12:43.718 "data_offset": 2048, 00:12:43.718 "data_size": 63488 00:12:43.718 } 00:12:43.718 ] 00:12:43.718 }' 00:12:43.718 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.718 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.286 [2024-12-05 20:05:45.466218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.286 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.286 "name": "raid_bdev1", 00:12:44.286 "aliases": [ 00:12:44.286 "608b4de5-87e4-4309-8ac4-3a21bfe898ff" 
00:12:44.286 ], 00:12:44.286 "product_name": "Raid Volume", 00:12:44.286 "block_size": 512, 00:12:44.286 "num_blocks": 253952, 00:12:44.286 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:44.286 "assigned_rate_limits": { 00:12:44.286 "rw_ios_per_sec": 0, 00:12:44.286 "rw_mbytes_per_sec": 0, 00:12:44.286 "r_mbytes_per_sec": 0, 00:12:44.286 "w_mbytes_per_sec": 0 00:12:44.286 }, 00:12:44.286 "claimed": false, 00:12:44.286 "zoned": false, 00:12:44.286 "supported_io_types": { 00:12:44.286 "read": true, 00:12:44.286 "write": true, 00:12:44.286 "unmap": true, 00:12:44.286 "flush": true, 00:12:44.286 "reset": true, 00:12:44.286 "nvme_admin": false, 00:12:44.286 "nvme_io": false, 00:12:44.286 "nvme_io_md": false, 00:12:44.286 "write_zeroes": true, 00:12:44.286 "zcopy": false, 00:12:44.286 "get_zone_info": false, 00:12:44.286 "zone_management": false, 00:12:44.286 "zone_append": false, 00:12:44.286 "compare": false, 00:12:44.286 "compare_and_write": false, 00:12:44.286 "abort": false, 00:12:44.286 "seek_hole": false, 00:12:44.286 "seek_data": false, 00:12:44.286 "copy": false, 00:12:44.286 "nvme_iov_md": false 00:12:44.286 }, 00:12:44.286 "memory_domains": [ 00:12:44.286 { 00:12:44.286 "dma_device_id": "system", 00:12:44.286 "dma_device_type": 1 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.286 "dma_device_type": 2 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "system", 00:12:44.286 "dma_device_type": 1 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.286 "dma_device_type": 2 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "system", 00:12:44.286 "dma_device_type": 1 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.286 "dma_device_type": 2 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": "system", 00:12:44.286 "dma_device_type": 1 00:12:44.286 }, 00:12:44.286 { 00:12:44.286 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:44.286 "dma_device_type": 2 00:12:44.286 } 00:12:44.286 ], 00:12:44.286 "driver_specific": { 00:12:44.286 "raid": { 00:12:44.286 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:44.286 "strip_size_kb": 64, 00:12:44.286 "state": "online", 00:12:44.286 "raid_level": "raid0", 00:12:44.286 "superblock": true, 00:12:44.286 "num_base_bdevs": 4, 00:12:44.286 "num_base_bdevs_discovered": 4, 00:12:44.286 "num_base_bdevs_operational": 4, 00:12:44.286 "base_bdevs_list": [ 00:12:44.286 { 00:12:44.287 "name": "pt1", 00:12:44.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.287 "is_configured": true, 00:12:44.287 "data_offset": 2048, 00:12:44.287 "data_size": 63488 00:12:44.287 }, 00:12:44.287 { 00:12:44.287 "name": "pt2", 00:12:44.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.287 "is_configured": true, 00:12:44.287 "data_offset": 2048, 00:12:44.287 "data_size": 63488 00:12:44.287 }, 00:12:44.287 { 00:12:44.287 "name": "pt3", 00:12:44.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.287 "is_configured": true, 00:12:44.287 "data_offset": 2048, 00:12:44.287 "data_size": 63488 00:12:44.287 }, 00:12:44.287 { 00:12:44.287 "name": "pt4", 00:12:44.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.287 "is_configured": true, 00:12:44.287 "data_offset": 2048, 00:12:44.287 "data_size": 63488 00:12:44.287 } 00:12:44.287 ] 00:12:44.287 } 00:12:44.287 } 00:12:44.287 }' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:44.287 pt2 00:12:44.287 pt3 00:12:44.287 pt4' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.287 20:05:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.287 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.546 [2024-12-05 20:05:45.797577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=608b4de5-87e4-4309-8ac4-3a21bfe898ff 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 608b4de5-87e4-4309-8ac4-3a21bfe898ff ']' 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.546 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.546 [2024-12-05 20:05:45.845143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.547 [2024-12-05 20:05:45.845171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.547 [2024-12-05 20:05:45.845258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.547 [2024-12-05 20:05:45.845328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.547 [2024-12-05 20:05:45.845342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.547 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.807 20:05:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.807 [2024-12-05 20:05:45.988937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:44.807 [2024-12-05 20:05:45.990873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:44.807 [2024-12-05 20:05:45.990935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:44.807 [2024-12-05 20:05:45.990973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:44.807 [2024-12-05 20:05:45.991042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:44.807 [2024-12-05 20:05:45.991096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:44.807 [2024-12-05 20:05:45.991118] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:44.807 [2024-12-05 20:05:45.991138] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:44.807 [2024-12-05 20:05:45.991153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.807 [2024-12-05 20:05:45.991168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:44.807 request: 00:12:44.807 { 00:12:44.807 "name": "raid_bdev1", 00:12:44.807 "raid_level": "raid0", 00:12:44.807 "base_bdevs": [ 00:12:44.807 "malloc1", 00:12:44.807 "malloc2", 00:12:44.807 "malloc3", 00:12:44.807 "malloc4" 00:12:44.807 ], 00:12:44.807 "strip_size_kb": 64, 00:12:44.807 "superblock": false, 00:12:44.807 "method": "bdev_raid_create", 00:12:44.807 "req_id": 1 00:12:44.807 } 00:12:44.807 Got JSON-RPC error response 00:12:44.807 response: 00:12:44.807 { 00:12:44.807 "code": -17, 00:12:44.807 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:44.807 } 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.807 20:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.807 [2024-12-05 20:05:46.056774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:44.807 [2024-12-05 20:05:46.056882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.807 [2024-12-05 20:05:46.056944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.807 [2024-12-05 20:05:46.056981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.807 [2024-12-05 20:05:46.059324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.807 [2024-12-05 20:05:46.059403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:44.807 [2024-12-05 20:05:46.059509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:44.807 [2024-12-05 20:05:46.059594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:44.807 pt1 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.807 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.808 "name": "raid_bdev1", 00:12:44.808 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:44.808 "strip_size_kb": 64, 00:12:44.808 "state": "configuring", 00:12:44.808 "raid_level": "raid0", 00:12:44.808 "superblock": true, 00:12:44.808 "num_base_bdevs": 4, 00:12:44.808 "num_base_bdevs_discovered": 1, 00:12:44.808 "num_base_bdevs_operational": 4, 00:12:44.808 "base_bdevs_list": [ 00:12:44.808 { 00:12:44.808 "name": "pt1", 00:12:44.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.808 "is_configured": true, 00:12:44.808 "data_offset": 2048, 00:12:44.808 "data_size": 63488 00:12:44.808 }, 00:12:44.808 { 00:12:44.808 "name": null, 00:12:44.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.808 "is_configured": false, 00:12:44.808 "data_offset": 2048, 00:12:44.808 "data_size": 63488 00:12:44.808 }, 00:12:44.808 { 00:12:44.808 "name": null, 00:12:44.808 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.808 "is_configured": false, 00:12:44.808 "data_offset": 2048, 00:12:44.808 "data_size": 63488 00:12:44.808 }, 00:12:44.808 { 00:12:44.808 "name": null, 00:12:44.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.808 "is_configured": false, 00:12:44.808 "data_offset": 2048, 00:12:44.808 "data_size": 63488 00:12:44.808 } 00:12:44.808 ] 00:12:44.808 }' 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.808 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.376 [2024-12-05 20:05:46.544074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.376 [2024-12-05 20:05:46.544156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.376 [2024-12-05 20:05:46.544178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:45.376 [2024-12-05 20:05:46.544190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.376 [2024-12-05 20:05:46.544735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.376 [2024-12-05 20:05:46.544774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.376 [2024-12-05 20:05:46.544877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:45.376 [2024-12-05 20:05:46.544917] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.376 pt2 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.376 [2024-12-05 20:05:46.552060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.376 20:05:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.376 "name": "raid_bdev1", 00:12:45.376 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:45.376 "strip_size_kb": 64, 00:12:45.376 "state": "configuring", 00:12:45.376 "raid_level": "raid0", 00:12:45.376 "superblock": true, 00:12:45.376 "num_base_bdevs": 4, 00:12:45.376 "num_base_bdevs_discovered": 1, 00:12:45.376 "num_base_bdevs_operational": 4, 00:12:45.376 "base_bdevs_list": [ 00:12:45.376 { 00:12:45.376 "name": "pt1", 00:12:45.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.376 "is_configured": true, 00:12:45.376 "data_offset": 2048, 00:12:45.376 "data_size": 63488 00:12:45.376 }, 00:12:45.376 { 00:12:45.376 "name": null, 00:12:45.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.376 "is_configured": false, 00:12:45.376 "data_offset": 0, 00:12:45.376 "data_size": 63488 00:12:45.376 }, 00:12:45.376 { 00:12:45.376 "name": null, 00:12:45.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.376 "is_configured": false, 00:12:45.376 "data_offset": 2048, 00:12:45.376 "data_size": 63488 00:12:45.376 }, 00:12:45.376 { 00:12:45.376 "name": null, 00:12:45.376 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.376 "is_configured": false, 00:12:45.376 "data_offset": 2048, 00:12:45.376 "data_size": 63488 00:12:45.376 } 00:12:45.376 ] 00:12:45.376 }' 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.376 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 [2024-12-05 20:05:46.987299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.636 [2024-12-05 20:05:46.987415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.636 [2024-12-05 20:05:46.987456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:45.636 [2024-12-05 20:05:46.987483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.636 [2024-12-05 20:05:46.987972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.636 [2024-12-05 20:05:46.988038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.636 [2024-12-05 20:05:46.988163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:45.636 [2024-12-05 20:05:46.988223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.636 pt2 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 [2024-12-05 20:05:46.999244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:45.636 [2024-12-05 20:05:46.999330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.636 [2024-12-05 20:05:46.999366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:45.636 [2024-12-05 20:05:46.999421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.636 [2024-12-05 20:05:46.999817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.636 [2024-12-05 20:05:46.999877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:45.636 [2024-12-05 20:05:46.999985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:45.636 [2024-12-05 20:05:47.000042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:45.636 pt3 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 [2024-12-05 20:05:47.011197] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:45.636 [2024-12-05 20:05:47.011296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.636 [2024-12-05 20:05:47.011327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:45.636 [2024-12-05 20:05:47.011353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.636 [2024-12-05 20:05:47.011716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.636 [2024-12-05 20:05:47.011772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:45.636 [2024-12-05 20:05:47.011841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:45.636 [2024-12-05 20:05:47.011861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:45.636 [2024-12-05 20:05:47.012029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.636 [2024-12-05 20:05:47.012039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:45.636 [2024-12-05 20:05:47.012289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:45.636 [2024-12-05 20:05:47.012439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.636 [2024-12-05 20:05:47.012452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:45.636 [2024-12-05 20:05:47.012582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.636 pt4 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.636 
20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.636 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.637 "name": "raid_bdev1", 00:12:45.637 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:45.637 "strip_size_kb": 64, 00:12:45.637 "state": "online", 00:12:45.637 "raid_level": "raid0", 00:12:45.637 "superblock": true, 00:12:45.637 
"num_base_bdevs": 4, 00:12:45.637 "num_base_bdevs_discovered": 4, 00:12:45.637 "num_base_bdevs_operational": 4, 00:12:45.637 "base_bdevs_list": [ 00:12:45.637 { 00:12:45.637 "name": "pt1", 00:12:45.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.637 "is_configured": true, 00:12:45.637 "data_offset": 2048, 00:12:45.637 "data_size": 63488 00:12:45.637 }, 00:12:45.637 { 00:12:45.637 "name": "pt2", 00:12:45.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.637 "is_configured": true, 00:12:45.637 "data_offset": 2048, 00:12:45.637 "data_size": 63488 00:12:45.637 }, 00:12:45.637 { 00:12:45.637 "name": "pt3", 00:12:45.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.637 "is_configured": true, 00:12:45.637 "data_offset": 2048, 00:12:45.637 "data_size": 63488 00:12:45.637 }, 00:12:45.637 { 00:12:45.637 "name": "pt4", 00:12:45.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.637 "is_configured": true, 00:12:45.637 "data_offset": 2048, 00:12:45.637 "data_size": 63488 00:12:45.637 } 00:12:45.637 ] 00:12:45.637 }' 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.637 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.206 [2024-12-05 20:05:47.486793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.206 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.206 "name": "raid_bdev1", 00:12:46.206 "aliases": [ 00:12:46.206 "608b4de5-87e4-4309-8ac4-3a21bfe898ff" 00:12:46.206 ], 00:12:46.206 "product_name": "Raid Volume", 00:12:46.206 "block_size": 512, 00:12:46.206 "num_blocks": 253952, 00:12:46.206 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:46.206 "assigned_rate_limits": { 00:12:46.206 "rw_ios_per_sec": 0, 00:12:46.206 "rw_mbytes_per_sec": 0, 00:12:46.206 "r_mbytes_per_sec": 0, 00:12:46.206 "w_mbytes_per_sec": 0 00:12:46.206 }, 00:12:46.206 "claimed": false, 00:12:46.206 "zoned": false, 00:12:46.206 "supported_io_types": { 00:12:46.206 "read": true, 00:12:46.206 "write": true, 00:12:46.206 "unmap": true, 00:12:46.206 "flush": true, 00:12:46.206 "reset": true, 00:12:46.206 "nvme_admin": false, 00:12:46.206 "nvme_io": false, 00:12:46.206 "nvme_io_md": false, 00:12:46.206 "write_zeroes": true, 00:12:46.206 "zcopy": false, 00:12:46.206 "get_zone_info": false, 00:12:46.206 "zone_management": false, 00:12:46.206 "zone_append": false, 00:12:46.206 "compare": false, 00:12:46.206 "compare_and_write": false, 00:12:46.206 "abort": false, 00:12:46.206 "seek_hole": false, 00:12:46.206 "seek_data": false, 00:12:46.206 "copy": false, 00:12:46.206 "nvme_iov_md": false 00:12:46.206 }, 00:12:46.206 "memory_domains": [ 00:12:46.206 { 00:12:46.206 "dma_device_id": "system", 
00:12:46.206 "dma_device_type": 1 00:12:46.206 }, 00:12:46.206 { 00:12:46.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.206 "dma_device_type": 2 00:12:46.206 }, 00:12:46.206 { 00:12:46.206 "dma_device_id": "system", 00:12:46.207 "dma_device_type": 1 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.207 "dma_device_type": 2 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "dma_device_id": "system", 00:12:46.207 "dma_device_type": 1 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.207 "dma_device_type": 2 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "dma_device_id": "system", 00:12:46.207 "dma_device_type": 1 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.207 "dma_device_type": 2 00:12:46.207 } 00:12:46.207 ], 00:12:46.207 "driver_specific": { 00:12:46.207 "raid": { 00:12:46.207 "uuid": "608b4de5-87e4-4309-8ac4-3a21bfe898ff", 00:12:46.207 "strip_size_kb": 64, 00:12:46.207 "state": "online", 00:12:46.207 "raid_level": "raid0", 00:12:46.207 "superblock": true, 00:12:46.207 "num_base_bdevs": 4, 00:12:46.207 "num_base_bdevs_discovered": 4, 00:12:46.207 "num_base_bdevs_operational": 4, 00:12:46.207 "base_bdevs_list": [ 00:12:46.207 { 00:12:46.207 "name": "pt1", 00:12:46.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:46.207 "is_configured": true, 00:12:46.207 "data_offset": 2048, 00:12:46.207 "data_size": 63488 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "name": "pt2", 00:12:46.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.207 "is_configured": true, 00:12:46.207 "data_offset": 2048, 00:12:46.207 "data_size": 63488 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "name": "pt3", 00:12:46.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.207 "is_configured": true, 00:12:46.207 "data_offset": 2048, 00:12:46.207 "data_size": 63488 00:12:46.207 }, 00:12:46.207 { 00:12:46.207 "name": "pt4", 00:12:46.207 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.207 "is_configured": true, 00:12:46.207 "data_offset": 2048, 00:12:46.207 "data_size": 63488 00:12:46.207 } 00:12:46.207 ] 00:12:46.207 } 00:12:46.207 } 00:12:46.207 }' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:46.207 pt2 00:12:46.207 pt3 00:12:46.207 pt4' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.207 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 [2024-12-05 20:05:47.822250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 608b4de5-87e4-4309-8ac4-3a21bfe898ff '!=' 608b4de5-87e4-4309-8ac4-3a21bfe898ff ']' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70849 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70849 ']' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70849 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:46.467 20:05:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70849 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70849' 00:12:46.467 killing process with pid 70849 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70849 00:12:46.467 [2024-12-05 20:05:47.887645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.467 [2024-12-05 20:05:47.887793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.467 [2024-12-05 20:05:47.887927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.467 20:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70849 00:12:46.467 [2024-12-05 20:05:47.887978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:47.042 [2024-12-05 20:05:48.295043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.420 20:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:48.420 00:12:48.420 real 0m5.589s 00:12:48.420 user 0m8.050s 00:12:48.420 sys 0m0.957s 00:12:48.420 20:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.420 20:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.420 ************************************ 00:12:48.420 END TEST raid_superblock_test 00:12:48.420 ************************************ 00:12:48.420 
20:05:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:48.420 20:05:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.420 20:05:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.420 20:05:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.420 ************************************ 00:12:48.420 START TEST raid_read_error_test 00:12:48.420 ************************************ 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZgVlgdGaok 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71114 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:48.420 20:05:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71114 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71114 ']' 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.420 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.420 [2024-12-05 20:05:49.610277] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:48.420 [2024-12-05 20:05:49.610393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71114 ] 00:12:48.420 [2024-12-05 20:05:49.786684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.680 [2024-12-05 20:05:49.906112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.680 [2024-12-05 20:05:50.112647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.680 [2024-12-05 20:05:50.112680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 BaseBdev1_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 true 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 [2024-12-05 20:05:50.557060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:49.249 [2024-12-05 20:05:50.557115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.249 [2024-12-05 20:05:50.557136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:49.249 [2024-12-05 20:05:50.557147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.249 [2024-12-05 20:05:50.559288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.249 [2024-12-05 20:05:50.559418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.249 BaseBdev1 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 BaseBdev2_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 true 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.249 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 [2024-12-05 20:05:50.623339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:49.249 [2024-12-05 20:05:50.623392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.250 [2024-12-05 20:05:50.623410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:49.250 [2024-12-05 20:05:50.623419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.250 [2024-12-05 20:05:50.625733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.250 [2024-12-05 20:05:50.625816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.250 BaseBdev2 00:12:49.250 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.250 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.250 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:49.250 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.250 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 BaseBdev3_malloc 00:12:49.509 20:05:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 true 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 [2024-12-05 20:05:50.703963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:49.509 [2024-12-05 20:05:50.704066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.509 [2024-12-05 20:05:50.704088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:49.509 [2024-12-05 20:05:50.704099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.509 [2024-12-05 20:05:50.706383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.509 [2024-12-05 20:05:50.706421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:49.509 BaseBdev3 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 BaseBdev4_malloc 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 true 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 [2024-12-05 20:05:50.770505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:49.509 [2024-12-05 20:05:50.770559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.509 [2024-12-05 20:05:50.770577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:49.509 [2024-12-05 20:05:50.770587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.509 [2024-12-05 20:05:50.772856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.509 [2024-12-05 20:05:50.772910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:49.509 BaseBdev4 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 [2024-12-05 20:05:50.782554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.509 [2024-12-05 20:05:50.784678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.509 [2024-12-05 20:05:50.784774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.509 [2024-12-05 20:05:50.784854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.509 [2024-12-05 20:05:50.785145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:49.509 [2024-12-05 20:05:50.785174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:49.509 [2024-12-05 20:05:50.785500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:49.509 [2024-12-05 20:05:50.785719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:49.509 [2024-12-05 20:05:50.785734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:49.509 [2024-12-05 20:05:50.785963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:49.510 20:05:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.510 "name": "raid_bdev1", 00:12:49.510 "uuid": "6cf5bf06-cb8d-4b9a-8526-60e846b2d791", 00:12:49.510 "strip_size_kb": 64, 00:12:49.510 "state": "online", 00:12:49.510 "raid_level": "raid0", 00:12:49.510 "superblock": true, 00:12:49.510 "num_base_bdevs": 4, 00:12:49.510 "num_base_bdevs_discovered": 4, 00:12:49.510 "num_base_bdevs_operational": 4, 00:12:49.510 "base_bdevs_list": [ 00:12:49.510 
{ 00:12:49.510 "name": "BaseBdev1", 00:12:49.510 "uuid": "cca8972e-17b9-5311-b0c5-1d2892ce6f0b", 00:12:49.510 "is_configured": true, 00:12:49.510 "data_offset": 2048, 00:12:49.510 "data_size": 63488 00:12:49.510 }, 00:12:49.510 { 00:12:49.510 "name": "BaseBdev2", 00:12:49.510 "uuid": "85f73893-cfcc-5395-8165-4345605d3581", 00:12:49.510 "is_configured": true, 00:12:49.510 "data_offset": 2048, 00:12:49.510 "data_size": 63488 00:12:49.510 }, 00:12:49.510 { 00:12:49.510 "name": "BaseBdev3", 00:12:49.510 "uuid": "4b1cd983-3bd5-5686-967e-8f431e47685f", 00:12:49.510 "is_configured": true, 00:12:49.510 "data_offset": 2048, 00:12:49.510 "data_size": 63488 00:12:49.510 }, 00:12:49.510 { 00:12:49.510 "name": "BaseBdev4", 00:12:49.510 "uuid": "3185685c-3cdf-56b8-9b13-22d42bfa5828", 00:12:49.510 "is_configured": true, 00:12:49.510 "data_offset": 2048, 00:12:49.510 "data_size": 63488 00:12:49.510 } 00:12:49.510 ] 00:12:49.510 }' 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.510 20:05:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.078 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:50.078 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.078 [2024-12-05 20:05:51.374909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.017 20:05:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.017 20:05:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.017 "name": "raid_bdev1", 00:12:51.017 "uuid": "6cf5bf06-cb8d-4b9a-8526-60e846b2d791", 00:12:51.017 "strip_size_kb": 64, 00:12:51.017 "state": "online", 00:12:51.017 "raid_level": "raid0", 00:12:51.017 "superblock": true, 00:12:51.017 "num_base_bdevs": 4, 00:12:51.017 "num_base_bdevs_discovered": 4, 00:12:51.017 "num_base_bdevs_operational": 4, 00:12:51.017 "base_bdevs_list": [ 00:12:51.017 { 00:12:51.017 "name": "BaseBdev1", 00:12:51.017 "uuid": "cca8972e-17b9-5311-b0c5-1d2892ce6f0b", 00:12:51.017 "is_configured": true, 00:12:51.017 "data_offset": 2048, 00:12:51.017 "data_size": 63488 00:12:51.017 }, 00:12:51.017 { 00:12:51.017 "name": "BaseBdev2", 00:12:51.017 "uuid": "85f73893-cfcc-5395-8165-4345605d3581", 00:12:51.017 "is_configured": true, 00:12:51.017 "data_offset": 2048, 00:12:51.017 "data_size": 63488 00:12:51.017 }, 00:12:51.017 { 00:12:51.017 "name": "BaseBdev3", 00:12:51.017 "uuid": "4b1cd983-3bd5-5686-967e-8f431e47685f", 00:12:51.017 "is_configured": true, 00:12:51.017 "data_offset": 2048, 00:12:51.017 "data_size": 63488 00:12:51.017 }, 00:12:51.017 { 00:12:51.017 "name": "BaseBdev4", 00:12:51.017 "uuid": "3185685c-3cdf-56b8-9b13-22d42bfa5828", 00:12:51.017 "is_configured": true, 00:12:51.017 "data_offset": 2048, 00:12:51.017 "data_size": 63488 00:12:51.017 } 00:12:51.017 ] 00:12:51.017 }' 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.017 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.586 [2024-12-05 20:05:52.745214] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.586 [2024-12-05 20:05:52.745248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.586 [2024-12-05 20:05:52.748062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.586 [2024-12-05 20:05:52.748125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.586 [2024-12-05 20:05:52.748169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.586 [2024-12-05 20:05:52.748181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:51.586 { 00:12:51.586 "results": [ 00:12:51.586 { 00:12:51.586 "job": "raid_bdev1", 00:12:51.586 "core_mask": "0x1", 00:12:51.586 "workload": "randrw", 00:12:51.586 "percentage": 50, 00:12:51.586 "status": "finished", 00:12:51.586 "queue_depth": 1, 00:12:51.586 "io_size": 131072, 00:12:51.586 "runtime": 1.370945, 00:12:51.586 "iops": 14964.130581460233, 00:12:51.586 "mibps": 1870.516322682529, 00:12:51.586 "io_failed": 1, 00:12:51.586 "io_timeout": 0, 00:12:51.586 "avg_latency_us": 92.82149878122603, 00:12:51.586 "min_latency_us": 27.72401746724891, 00:12:51.586 "max_latency_us": 1452.380786026201 00:12:51.586 } 00:12:51.586 ], 00:12:51.586 "core_count": 1 00:12:51.586 } 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71114 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71114 ']' 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71114 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71114 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.586 killing process with pid 71114 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71114' 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71114 00:12:51.586 [2024-12-05 20:05:52.785997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.586 20:05:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71114 00:12:51.844 [2024-12-05 20:05:53.113025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZgVlgdGaok 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:53.224 00:12:53.224 real 0m4.832s 00:12:53.224 user 0m5.747s 00:12:53.224 sys 0m0.594s 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:53.224 20:05:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.224 ************************************ 00:12:53.225 END TEST raid_read_error_test 00:12:53.225 ************************************ 00:12:53.225 20:05:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:53.225 20:05:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:53.225 20:05:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.225 20:05:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.225 ************************************ 00:12:53.225 START TEST raid_write_error_test 00:12:53.225 ************************************ 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ManXRfQLm6 00:12:53.225 20:05:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71265 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71265 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71265 ']' 00:12:53.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.225 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.225 [2024-12-05 20:05:54.514598] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:53.225 [2024-12-05 20:05:54.514727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71265 ] 00:12:53.484 [2024-12-05 20:05:54.682801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.484 [2024-12-05 20:05:54.796567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.744 [2024-12-05 20:05:54.994002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.744 [2024-12-05 20:05:54.994048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.003 BaseBdev1_malloc 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.003 true 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.003 [2024-12-05 20:05:55.410803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:54.003 [2024-12-05 20:05:55.410856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.003 [2024-12-05 20:05:55.410892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:54.003 [2024-12-05 20:05:55.410915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.003 [2024-12-05 20:05:55.412984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.003 [2024-12-05 20:05:55.413022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.003 BaseBdev1 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.003 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.262 BaseBdev2_malloc 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:54.262 20:05:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.262 true 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.262 [2024-12-05 20:05:55.477337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:54.262 [2024-12-05 20:05:55.477393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.262 [2024-12-05 20:05:55.477410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:54.262 [2024-12-05 20:05:55.477420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.262 [2024-12-05 20:05:55.479493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.262 [2024-12-05 20:05:55.479546] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.262 BaseBdev2 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:54.262 BaseBdev3_malloc 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.262 true 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.262 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 [2024-12-05 20:05:55.555688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:54.263 [2024-12-05 20:05:55.555741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.263 [2024-12-05 20:05:55.555758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:54.263 [2024-12-05 20:05:55.555768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.263 [2024-12-05 20:05:55.557865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.263 [2024-12-05 20:05:55.557914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:54.263 BaseBdev3 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 BaseBdev4_malloc 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 true 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 [2024-12-05 20:05:55.621849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:54.263 [2024-12-05 20:05:55.621930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.263 [2024-12-05 20:05:55.621954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:54.263 [2024-12-05 20:05:55.621965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.263 [2024-12-05 20:05:55.624138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.263 [2024-12-05 20:05:55.624234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:54.263 BaseBdev4 
00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 [2024-12-05 20:05:55.633891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.263 [2024-12-05 20:05:55.635709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.263 [2024-12-05 20:05:55.635789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.263 [2024-12-05 20:05:55.635852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.263 [2024-12-05 20:05:55.636134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:54.263 [2024-12-05 20:05:55.636153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:54.263 [2024-12-05 20:05:55.636483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:54.263 [2024-12-05 20:05:55.636697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:54.263 [2024-12-05 20:05:55.636711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:54.263 [2024-12-05 20:05:55.636915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.263 "name": "raid_bdev1", 00:12:54.263 "uuid": "9eec1902-87e7-418f-8815-ca7f3c96e5f8", 00:12:54.263 "strip_size_kb": 64, 00:12:54.263 "state": "online", 00:12:54.263 "raid_level": "raid0", 00:12:54.263 "superblock": true, 00:12:54.263 "num_base_bdevs": 4, 00:12:54.263 "num_base_bdevs_discovered": 4, 00:12:54.263 
"num_base_bdevs_operational": 4, 00:12:54.263 "base_bdevs_list": [ 00:12:54.263 { 00:12:54.263 "name": "BaseBdev1", 00:12:54.263 "uuid": "02c2b72b-06eb-576b-a73d-6f29aeffef8d", 00:12:54.263 "is_configured": true, 00:12:54.263 "data_offset": 2048, 00:12:54.263 "data_size": 63488 00:12:54.263 }, 00:12:54.263 { 00:12:54.263 "name": "BaseBdev2", 00:12:54.263 "uuid": "f4abb786-1e30-5299-b1d4-da74210bcbfd", 00:12:54.263 "is_configured": true, 00:12:54.263 "data_offset": 2048, 00:12:54.263 "data_size": 63488 00:12:54.263 }, 00:12:54.263 { 00:12:54.263 "name": "BaseBdev3", 00:12:54.263 "uuid": "587b55b4-e59c-55c1-93d5-351c92e35991", 00:12:54.263 "is_configured": true, 00:12:54.263 "data_offset": 2048, 00:12:54.263 "data_size": 63488 00:12:54.263 }, 00:12:54.263 { 00:12:54.263 "name": "BaseBdev4", 00:12:54.263 "uuid": "57518765-f37e-5f44-8bd4-98634453203f", 00:12:54.263 "is_configured": true, 00:12:54.263 "data_offset": 2048, 00:12:54.263 "data_size": 63488 00:12:54.263 } 00:12:54.263 ] 00:12:54.263 }' 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.263 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.831 20:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:54.831 20:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:54.831 [2024-12-05 20:05:56.166246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.770 "name": "raid_bdev1", 00:12:55.770 "uuid": "9eec1902-87e7-418f-8815-ca7f3c96e5f8", 00:12:55.770 "strip_size_kb": 64, 00:12:55.770 "state": "online", 00:12:55.770 "raid_level": "raid0", 00:12:55.770 "superblock": true, 00:12:55.770 "num_base_bdevs": 4, 00:12:55.770 "num_base_bdevs_discovered": 4, 00:12:55.770 "num_base_bdevs_operational": 4, 00:12:55.770 "base_bdevs_list": [ 00:12:55.770 { 00:12:55.770 "name": "BaseBdev1", 00:12:55.770 "uuid": "02c2b72b-06eb-576b-a73d-6f29aeffef8d", 00:12:55.770 "is_configured": true, 00:12:55.770 "data_offset": 2048, 00:12:55.770 "data_size": 63488 00:12:55.770 }, 00:12:55.770 { 00:12:55.770 "name": "BaseBdev2", 00:12:55.770 "uuid": "f4abb786-1e30-5299-b1d4-da74210bcbfd", 00:12:55.770 "is_configured": true, 00:12:55.770 "data_offset": 2048, 00:12:55.770 "data_size": 63488 00:12:55.770 }, 00:12:55.770 { 00:12:55.770 "name": "BaseBdev3", 00:12:55.770 "uuid": "587b55b4-e59c-55c1-93d5-351c92e35991", 00:12:55.770 "is_configured": true, 00:12:55.770 "data_offset": 2048, 00:12:55.770 "data_size": 63488 00:12:55.770 }, 00:12:55.770 { 00:12:55.770 "name": "BaseBdev4", 00:12:55.770 "uuid": "57518765-f37e-5f44-8bd4-98634453203f", 00:12:55.770 "is_configured": true, 00:12:55.770 "data_offset": 2048, 00:12:55.770 "data_size": 63488 00:12:55.770 } 00:12:55.770 ] 00:12:55.770 }' 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.770 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:56.340 [2024-12-05 20:05:57.542435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.340 [2024-12-05 20:05:57.542519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.340 [2024-12-05 20:05:57.545369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.340 [2024-12-05 20:05:57.545473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.340 [2024-12-05 20:05:57.545525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.340 [2024-12-05 20:05:57.545538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:56.340 { 00:12:56.340 "results": [ 00:12:56.340 { 00:12:56.340 "job": "raid_bdev1", 00:12:56.340 "core_mask": "0x1", 00:12:56.340 "workload": "randrw", 00:12:56.340 "percentage": 50, 00:12:56.340 "status": "finished", 00:12:56.340 "queue_depth": 1, 00:12:56.340 "io_size": 131072, 00:12:56.340 "runtime": 1.3771, 00:12:56.340 "iops": 15004.720063902403, 00:12:56.340 "mibps": 1875.5900079878004, 00:12:56.340 "io_failed": 1, 00:12:56.340 "io_timeout": 0, 00:12:56.340 "avg_latency_us": 92.58316537251461, 00:12:56.340 "min_latency_us": 27.165065502183406, 00:12:56.340 "max_latency_us": 1373.6803493449781 00:12:56.340 } 00:12:56.340 ], 00:12:56.340 "core_count": 1 00:12:56.340 } 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71265 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71265 ']' 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71265 00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:56.340 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71265 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71265' 00:12:56.341 killing process with pid 71265 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71265 00:12:56.341 [2024-12-05 20:05:57.591144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.341 20:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71265 00:12:56.601 [2024-12-05 20:05:57.915364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ManXRfQLm6 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:58.038 ************************************ 00:12:58.038 END TEST raid_write_error_test 00:12:58.038 ************************************ 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:12:58.038 00:12:58.038 real 0m4.707s 00:12:58.038 user 0m5.528s 00:12:58.038 sys 0m0.590s 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.038 20:05:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.038 20:05:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:58.038 20:05:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:58.038 20:05:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:58.038 20:05:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.038 20:05:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.038 ************************************ 00:12:58.038 START TEST raid_state_function_test 00:12:58.038 ************************************ 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71403 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71403' 00:12:58.038 Process raid pid: 71403 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71403 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71403 ']' 00:12:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.038 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.038 [2024-12-05 20:05:59.287296] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:12:58.039 [2024-12-05 20:05:59.287411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.297 [2024-12-05 20:05:59.463305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.297 [2024-12-05 20:05:59.580624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.557 [2024-12-05 20:05:59.788704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.557 [2024-12-05 20:05:59.788795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.817 [2024-12-05 20:06:00.135370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.817 [2024-12-05 20:06:00.135431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.817 [2024-12-05 20:06:00.135442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.817 [2024-12-05 20:06:00.135452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.817 [2024-12-05 20:06:00.135458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:58.817 [2024-12-05 20:06:00.135467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.817 [2024-12-05 20:06:00.135474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.817 [2024-12-05 20:06:00.135482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.817 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.817 "name": "Existed_Raid", 00:12:58.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.817 "strip_size_kb": 64, 00:12:58.817 "state": "configuring", 00:12:58.817 "raid_level": "concat", 00:12:58.818 "superblock": false, 00:12:58.818 "num_base_bdevs": 4, 00:12:58.818 "num_base_bdevs_discovered": 0, 00:12:58.818 "num_base_bdevs_operational": 4, 00:12:58.818 "base_bdevs_list": [ 00:12:58.818 { 00:12:58.818 "name": "BaseBdev1", 00:12:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.818 "is_configured": false, 00:12:58.818 "data_offset": 0, 00:12:58.818 "data_size": 0 00:12:58.818 }, 00:12:58.818 { 00:12:58.818 "name": "BaseBdev2", 00:12:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.818 "is_configured": false, 00:12:58.818 "data_offset": 0, 00:12:58.818 "data_size": 0 00:12:58.818 }, 00:12:58.818 { 00:12:58.818 "name": "BaseBdev3", 00:12:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.818 "is_configured": false, 00:12:58.818 "data_offset": 0, 00:12:58.818 "data_size": 0 00:12:58.818 }, 00:12:58.818 { 00:12:58.818 "name": "BaseBdev4", 00:12:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.818 "is_configured": false, 00:12:58.818 "data_offset": 0, 00:12:58.818 "data_size": 0 00:12:58.818 } 00:12:58.818 ] 00:12:58.818 }' 00:12:58.818 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.818 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.387 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:59.387 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.387 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.387 [2024-12-05 20:06:00.526653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.387 [2024-12-05 20:06:00.526760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:59.387 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.388 [2024-12-05 20:06:00.534635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.388 [2024-12-05 20:06:00.534718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.388 [2024-12-05 20:06:00.534747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.388 [2024-12-05 20:06:00.534771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.388 [2024-12-05 20:06:00.534790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.388 [2024-12-05 20:06:00.534813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.388 [2024-12-05 20:06:00.534838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:59.388 [2024-12-05 20:06:00.534892] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.388 [2024-12-05 20:06:00.577991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.388 BaseBdev1 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.388 [ 00:12:59.388 { 00:12:59.388 "name": "BaseBdev1", 00:12:59.388 "aliases": [ 00:12:59.388 "36eb5d13-8d1f-4508-abba-5e27e3f3cf10" 00:12:59.388 ], 00:12:59.388 "product_name": "Malloc disk", 00:12:59.388 "block_size": 512, 00:12:59.388 "num_blocks": 65536, 00:12:59.388 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10", 00:12:59.388 "assigned_rate_limits": { 00:12:59.388 "rw_ios_per_sec": 0, 00:12:59.388 "rw_mbytes_per_sec": 0, 00:12:59.388 "r_mbytes_per_sec": 0, 00:12:59.388 "w_mbytes_per_sec": 0 00:12:59.388 }, 00:12:59.388 "claimed": true, 00:12:59.388 "claim_type": "exclusive_write", 00:12:59.388 "zoned": false, 00:12:59.388 "supported_io_types": { 00:12:59.388 "read": true, 00:12:59.388 "write": true, 00:12:59.388 "unmap": true, 00:12:59.388 "flush": true, 00:12:59.388 "reset": true, 00:12:59.388 "nvme_admin": false, 00:12:59.388 "nvme_io": false, 00:12:59.388 "nvme_io_md": false, 00:12:59.388 "write_zeroes": true, 00:12:59.388 "zcopy": true, 00:12:59.388 "get_zone_info": false, 00:12:59.388 "zone_management": false, 00:12:59.388 "zone_append": false, 00:12:59.388 "compare": false, 00:12:59.388 "compare_and_write": false, 00:12:59.388 "abort": true, 00:12:59.388 "seek_hole": false, 00:12:59.388 "seek_data": false, 00:12:59.388 "copy": true, 00:12:59.388 "nvme_iov_md": false 00:12:59.388 }, 00:12:59.388 "memory_domains": [ 00:12:59.388 { 00:12:59.388 "dma_device_id": "system", 00:12:59.388 "dma_device_type": 1 00:12:59.388 }, 00:12:59.388 { 00:12:59.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.388 "dma_device_type": 2 00:12:59.388 } 00:12:59.388 ], 00:12:59.388 "driver_specific": {} 00:12:59.388 } 00:12:59.388 ] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.388 "name": "Existed_Raid", 
00:12:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.388 "strip_size_kb": 64, 00:12:59.388 "state": "configuring", 00:12:59.388 "raid_level": "concat", 00:12:59.388 "superblock": false, 00:12:59.388 "num_base_bdevs": 4, 00:12:59.388 "num_base_bdevs_discovered": 1, 00:12:59.388 "num_base_bdevs_operational": 4, 00:12:59.388 "base_bdevs_list": [ 00:12:59.388 { 00:12:59.388 "name": "BaseBdev1", 00:12:59.388 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10", 00:12:59.388 "is_configured": true, 00:12:59.388 "data_offset": 0, 00:12:59.388 "data_size": 65536 00:12:59.388 }, 00:12:59.388 { 00:12:59.388 "name": "BaseBdev2", 00:12:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.388 "is_configured": false, 00:12:59.388 "data_offset": 0, 00:12:59.388 "data_size": 0 00:12:59.388 }, 00:12:59.388 { 00:12:59.388 "name": "BaseBdev3", 00:12:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.388 "is_configured": false, 00:12:59.388 "data_offset": 0, 00:12:59.388 "data_size": 0 00:12:59.388 }, 00:12:59.388 { 00:12:59.388 "name": "BaseBdev4", 00:12:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.388 "is_configured": false, 00:12:59.388 "data_offset": 0, 00:12:59.388 "data_size": 0 00:12:59.388 } 00:12:59.388 ] 00:12:59.388 }' 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.388 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.647 [2024-12-05 20:06:01.053240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.647 [2024-12-05 20:06:01.053365] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.647 [2024-12-05 20:06:01.065278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.647 [2024-12-05 20:06:01.067177] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.647 [2024-12-05 20:06:01.067223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.647 [2024-12-05 20:06:01.067234] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.647 [2024-12-05 20:06:01.067244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.647 [2024-12-05 20:06:01.067251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:59.647 [2024-12-05 20:06:01.067260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.647 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.906 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.906 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.906 "name": "Existed_Raid", 00:12:59.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.906 "strip_size_kb": 64, 00:12:59.906 "state": "configuring", 00:12:59.906 "raid_level": "concat", 00:12:59.906 "superblock": false, 00:12:59.906 "num_base_bdevs": 4, 00:12:59.906 
"num_base_bdevs_discovered": 1, 00:12:59.906 "num_base_bdevs_operational": 4, 00:12:59.906 "base_bdevs_list": [ 00:12:59.906 { 00:12:59.906 "name": "BaseBdev1", 00:12:59.906 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10", 00:12:59.906 "is_configured": true, 00:12:59.906 "data_offset": 0, 00:12:59.906 "data_size": 65536 00:12:59.906 }, 00:12:59.906 { 00:12:59.906 "name": "BaseBdev2", 00:12:59.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.906 "is_configured": false, 00:12:59.906 "data_offset": 0, 00:12:59.906 "data_size": 0 00:12:59.906 }, 00:12:59.906 { 00:12:59.906 "name": "BaseBdev3", 00:12:59.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.906 "is_configured": false, 00:12:59.906 "data_offset": 0, 00:12:59.906 "data_size": 0 00:12:59.906 }, 00:12:59.906 { 00:12:59.906 "name": "BaseBdev4", 00:12:59.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.906 "is_configured": false, 00:12:59.906 "data_offset": 0, 00:12:59.906 "data_size": 0 00:12:59.906 } 00:12:59.906 ] 00:12:59.906 }' 00:12:59.906 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.906 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.165 [2024-12-05 20:06:01.510168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.165 BaseBdev2 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:00.165 20:06:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.165 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.165 [ 00:13:00.165 { 00:13:00.165 "name": "BaseBdev2", 00:13:00.165 "aliases": [ 00:13:00.165 "28268f94-c4f5-472f-964f-43f61a912703" 00:13:00.165 ], 00:13:00.165 "product_name": "Malloc disk", 00:13:00.165 "block_size": 512, 00:13:00.165 "num_blocks": 65536, 00:13:00.165 "uuid": "28268f94-c4f5-472f-964f-43f61a912703", 00:13:00.165 "assigned_rate_limits": { 00:13:00.165 "rw_ios_per_sec": 0, 00:13:00.165 "rw_mbytes_per_sec": 0, 00:13:00.165 "r_mbytes_per_sec": 0, 00:13:00.165 "w_mbytes_per_sec": 0 00:13:00.165 }, 00:13:00.165 "claimed": true, 00:13:00.165 "claim_type": "exclusive_write", 00:13:00.165 "zoned": false, 00:13:00.165 "supported_io_types": { 
00:13:00.165 "read": true, 00:13:00.165 "write": true, 00:13:00.165 "unmap": true, 00:13:00.165 "flush": true, 00:13:00.165 "reset": true, 00:13:00.165 "nvme_admin": false, 00:13:00.165 "nvme_io": false, 00:13:00.165 "nvme_io_md": false, 00:13:00.165 "write_zeroes": true, 00:13:00.165 "zcopy": true, 00:13:00.165 "get_zone_info": false, 00:13:00.165 "zone_management": false, 00:13:00.165 "zone_append": false, 00:13:00.165 "compare": false, 00:13:00.165 "compare_and_write": false, 00:13:00.165 "abort": true, 00:13:00.165 "seek_hole": false, 00:13:00.165 "seek_data": false, 00:13:00.165 "copy": true, 00:13:00.165 "nvme_iov_md": false 00:13:00.165 }, 00:13:00.165 "memory_domains": [ 00:13:00.165 { 00:13:00.165 "dma_device_id": "system", 00:13:00.165 "dma_device_type": 1 00:13:00.165 }, 00:13:00.165 { 00:13:00.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.165 "dma_device_type": 2 00:13:00.165 } 00:13:00.165 ], 00:13:00.165 "driver_specific": {} 00:13:00.165 } 00:13:00.166 ] 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.166 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.424 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.424 "name": "Existed_Raid", 00:13:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.424 "strip_size_kb": 64, 00:13:00.424 "state": "configuring", 00:13:00.424 "raid_level": "concat", 00:13:00.424 "superblock": false, 00:13:00.424 "num_base_bdevs": 4, 00:13:00.424 "num_base_bdevs_discovered": 2, 00:13:00.424 "num_base_bdevs_operational": 4, 00:13:00.424 "base_bdevs_list": [ 00:13:00.424 { 00:13:00.424 "name": "BaseBdev1", 00:13:00.424 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10", 00:13:00.424 "is_configured": true, 00:13:00.424 "data_offset": 0, 00:13:00.424 "data_size": 65536 00:13:00.424 }, 00:13:00.424 { 00:13:00.424 "name": "BaseBdev2", 00:13:00.424 "uuid": "28268f94-c4f5-472f-964f-43f61a912703", 00:13:00.424 
"is_configured": true, 00:13:00.424 "data_offset": 0, 00:13:00.424 "data_size": 65536 00:13:00.424 }, 00:13:00.424 { 00:13:00.424 "name": "BaseBdev3", 00:13:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.424 "is_configured": false, 00:13:00.424 "data_offset": 0, 00:13:00.424 "data_size": 0 00:13:00.424 }, 00:13:00.424 { 00:13:00.424 "name": "BaseBdev4", 00:13:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.424 "is_configured": false, 00:13:00.424 "data_offset": 0, 00:13:00.424 "data_size": 0 00:13:00.424 } 00:13:00.424 ] 00:13:00.424 }' 00:13:00.424 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.424 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:00.683 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.683 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 [2024-12-05 20:06:02.036755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.683 BaseBdev3 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 [ 00:13:00.683 { 00:13:00.683 "name": "BaseBdev3", 00:13:00.683 "aliases": [ 00:13:00.683 "53ed3119-f137-48f8-a72b-15edd466d96e" 00:13:00.683 ], 00:13:00.683 "product_name": "Malloc disk", 00:13:00.683 "block_size": 512, 00:13:00.683 "num_blocks": 65536, 00:13:00.683 "uuid": "53ed3119-f137-48f8-a72b-15edd466d96e", 00:13:00.683 "assigned_rate_limits": { 00:13:00.683 "rw_ios_per_sec": 0, 00:13:00.683 "rw_mbytes_per_sec": 0, 00:13:00.683 "r_mbytes_per_sec": 0, 00:13:00.683 "w_mbytes_per_sec": 0 00:13:00.683 }, 00:13:00.683 "claimed": true, 00:13:00.683 "claim_type": "exclusive_write", 00:13:00.683 "zoned": false, 00:13:00.683 "supported_io_types": { 00:13:00.683 "read": true, 00:13:00.683 "write": true, 00:13:00.683 "unmap": true, 00:13:00.683 "flush": true, 00:13:00.683 "reset": true, 00:13:00.683 "nvme_admin": false, 00:13:00.683 "nvme_io": false, 00:13:00.683 "nvme_io_md": false, 00:13:00.683 "write_zeroes": true, 00:13:00.683 "zcopy": true, 00:13:00.683 "get_zone_info": false, 00:13:00.683 "zone_management": false, 00:13:00.683 "zone_append": false, 00:13:00.683 "compare": false, 00:13:00.683 "compare_and_write": false, 
00:13:00.683 "abort": true, 00:13:00.683 "seek_hole": false, 00:13:00.683 "seek_data": false, 00:13:00.683 "copy": true, 00:13:00.683 "nvme_iov_md": false 00:13:00.683 }, 00:13:00.683 "memory_domains": [ 00:13:00.683 { 00:13:00.683 "dma_device_id": "system", 00:13:00.683 "dma_device_type": 1 00:13:00.683 }, 00:13:00.683 { 00:13:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.683 "dma_device_type": 2 00:13:00.683 } 00:13:00.683 ], 00:13:00.683 "driver_specific": {} 00:13:00.683 } 00:13:00.683 ] 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.942 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.942 "name": "Existed_Raid", 00:13:00.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.942 "strip_size_kb": 64, 00:13:00.942 "state": "configuring", 00:13:00.942 "raid_level": "concat", 00:13:00.942 "superblock": false, 00:13:00.942 "num_base_bdevs": 4, 00:13:00.942 "num_base_bdevs_discovered": 3, 00:13:00.942 "num_base_bdevs_operational": 4, 00:13:00.942 "base_bdevs_list": [ 00:13:00.942 { 00:13:00.942 "name": "BaseBdev1", 00:13:00.942 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10", 00:13:00.942 "is_configured": true, 00:13:00.942 "data_offset": 0, 00:13:00.942 "data_size": 65536 00:13:00.942 }, 00:13:00.942 { 00:13:00.942 "name": "BaseBdev2", 00:13:00.942 "uuid": "28268f94-c4f5-472f-964f-43f61a912703", 00:13:00.942 "is_configured": true, 00:13:00.942 "data_offset": 0, 00:13:00.942 "data_size": 65536 00:13:00.942 }, 00:13:00.942 { 00:13:00.942 "name": "BaseBdev3", 00:13:00.942 "uuid": "53ed3119-f137-48f8-a72b-15edd466d96e", 00:13:00.942 "is_configured": true, 00:13:00.943 "data_offset": 0, 00:13:00.943 "data_size": 65536 00:13:00.943 }, 00:13:00.943 { 00:13:00.943 "name": "BaseBdev4", 00:13:00.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.943 "is_configured": false, 
00:13:00.943 "data_offset": 0,
00:13:00.943 "data_size": 0
00:13:00.943 }
00:13:00.943 ]
00:13:00.943 }'
00:13:00.943 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:00.943 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.203 [2024-12-05 20:06:02.574309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:01.203 [2024-12-05 20:06:02.574358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:01.203 [2024-12-05 20:06:02.574368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:13:01.203 [2024-12-05 20:06:02.574650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:01.203 [2024-12-05 20:06:02.574817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:01.203 [2024-12-05 20:06:02.574828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:13:01.203 [2024-12-05 20:06:02.575152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:01.203 BaseBdev4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.203 [
00:13:01.203 {
00:13:01.203 "name": "BaseBdev4",
00:13:01.203 "aliases": [
00:13:01.203 "b989d501-b529-4085-a7c1-ffed043b92fa"
00:13:01.203 ],
00:13:01.203 "product_name": "Malloc disk",
00:13:01.203 "block_size": 512,
00:13:01.203 "num_blocks": 65536,
00:13:01.203 "uuid": "b989d501-b529-4085-a7c1-ffed043b92fa",
00:13:01.203 "assigned_rate_limits": {
00:13:01.203 "rw_ios_per_sec": 0,
00:13:01.203 "rw_mbytes_per_sec": 0,
00:13:01.203 "r_mbytes_per_sec": 0,
00:13:01.203 "w_mbytes_per_sec": 0
00:13:01.203 },
00:13:01.203 "claimed": true,
00:13:01.203 "claim_type": "exclusive_write",
00:13:01.203 "zoned": false,
00:13:01.203 "supported_io_types": {
00:13:01.203 "read": true,
00:13:01.203 "write": true,
00:13:01.203 "unmap": true,
00:13:01.203 "flush": true,
00:13:01.203 "reset": true,
00:13:01.203 "nvme_admin": false,
00:13:01.203 "nvme_io": false,
00:13:01.203 "nvme_io_md": false,
00:13:01.203 "write_zeroes": true,
00:13:01.203 "zcopy": true,
00:13:01.203 "get_zone_info": false,
00:13:01.203 "zone_management": false,
00:13:01.203 "zone_append": false,
00:13:01.203 "compare": false,
00:13:01.203 "compare_and_write": false,
00:13:01.203 "abort": true,
00:13:01.203 "seek_hole": false,
00:13:01.203 "seek_data": false,
00:13:01.203 "copy": true,
00:13:01.203 "nvme_iov_md": false
00:13:01.203 },
00:13:01.203 "memory_domains": [
00:13:01.203 {
00:13:01.203 "dma_device_id": "system",
00:13:01.203 "dma_device_type": 1
00:13:01.203 },
00:13:01.203 {
00:13:01.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:01.203 "dma_device_type": 2
00:13:01.203 }
00:13:01.203 ],
00:13:01.203 "driver_specific": {}
00:13:01.203 }
00:13:01.203 ]
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:01.203 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:01.204 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.204 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.463 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.463 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:01.463 "name": "Existed_Raid",
00:13:01.463 "uuid": "3f3733ba-1468-42da-b6e1-a88b141520e1",
00:13:01.463 "strip_size_kb": 64,
00:13:01.463 "state": "online",
00:13:01.463 "raid_level": "concat",
00:13:01.463 "superblock": false,
00:13:01.463 "num_base_bdevs": 4,
00:13:01.463 "num_base_bdevs_discovered": 4,
00:13:01.463 "num_base_bdevs_operational": 4,
00:13:01.463 "base_bdevs_list": [
00:13:01.463 {
00:13:01.463 "name": "BaseBdev1",
00:13:01.463 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10",
00:13:01.463 "is_configured": true,
00:13:01.463 "data_offset": 0,
00:13:01.463 "data_size": 65536
00:13:01.463 },
00:13:01.463 {
00:13:01.463 "name": "BaseBdev2",
00:13:01.463 "uuid": "28268f94-c4f5-472f-964f-43f61a912703",
00:13:01.463 "is_configured": true,
00:13:01.463 "data_offset": 0,
00:13:01.463 "data_size": 65536
00:13:01.463 },
00:13:01.463 {
00:13:01.463 "name": "BaseBdev3",
00:13:01.463 "uuid": "53ed3119-f137-48f8-a72b-15edd466d96e",
00:13:01.463 "is_configured": true,
00:13:01.463 "data_offset": 0,
00:13:01.463 "data_size": 65536
00:13:01.463 },
00:13:01.463 {
00:13:01.463 "name": "BaseBdev4",
00:13:01.463 "uuid": "b989d501-b529-4085-a7c1-ffed043b92fa",
00:13:01.463 "is_configured": true,
00:13:01.463 "data_offset": 0,
00:13:01.463 "data_size": 65536
00:13:01.463 }
00:13:01.463 ]
00:13:01.463 }'
00:13:01.463 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:01.463 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.723 [2024-12-05 20:06:03.073877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:01.723 "name": "Existed_Raid",
00:13:01.723 "aliases": [
00:13:01.723 "3f3733ba-1468-42da-b6e1-a88b141520e1"
00:13:01.723 ],
00:13:01.723 "product_name": "Raid Volume",
00:13:01.723 "block_size": 512,
00:13:01.723 "num_blocks": 262144,
00:13:01.723 "uuid": "3f3733ba-1468-42da-b6e1-a88b141520e1",
00:13:01.723 "assigned_rate_limits": {
00:13:01.723 "rw_ios_per_sec": 0,
00:13:01.723 "rw_mbytes_per_sec": 0,
00:13:01.723 "r_mbytes_per_sec": 0,
00:13:01.723 "w_mbytes_per_sec": 0
00:13:01.723 },
00:13:01.723 "claimed": false,
00:13:01.723 "zoned": false,
00:13:01.723 "supported_io_types": {
00:13:01.723 "read": true,
00:13:01.723 "write": true,
00:13:01.723 "unmap": true,
00:13:01.723 "flush": true,
00:13:01.723 "reset": true,
00:13:01.723 "nvme_admin": false,
00:13:01.723 "nvme_io": false,
00:13:01.723 "nvme_io_md": false,
00:13:01.723 "write_zeroes": true,
00:13:01.723 "zcopy": false,
00:13:01.723 "get_zone_info": false,
00:13:01.723 "zone_management": false,
00:13:01.723 "zone_append": false,
00:13:01.723 "compare": false,
00:13:01.723 "compare_and_write": false,
00:13:01.723 "abort": false,
00:13:01.723 "seek_hole": false,
00:13:01.723 "seek_data": false,
00:13:01.723 "copy": false,
00:13:01.723 "nvme_iov_md": false
00:13:01.723 },
00:13:01.723 "memory_domains": [
00:13:01.723 {
00:13:01.723 "dma_device_id": "system",
00:13:01.723 "dma_device_type": 1
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:01.723 "dma_device_type": 2
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "system",
00:13:01.723 "dma_device_type": 1
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:01.723 "dma_device_type": 2
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "system",
00:13:01.723 "dma_device_type": 1
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:01.723 "dma_device_type": 2
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "system",
00:13:01.723 "dma_device_type": 1
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:01.723 "dma_device_type": 2
00:13:01.723 }
00:13:01.723 ],
00:13:01.723 "driver_specific": {
00:13:01.723 "raid": {
00:13:01.723 "uuid": "3f3733ba-1468-42da-b6e1-a88b141520e1",
00:13:01.723 "strip_size_kb": 64,
00:13:01.723 "state": "online",
00:13:01.723 "raid_level": "concat",
00:13:01.723 "superblock": false,
00:13:01.723 "num_base_bdevs": 4,
00:13:01.723 "num_base_bdevs_discovered": 4,
00:13:01.723 "num_base_bdevs_operational": 4,
00:13:01.723 "base_bdevs_list": [
00:13:01.723 {
00:13:01.723 "name": "BaseBdev1",
00:13:01.723 "uuid": "36eb5d13-8d1f-4508-abba-5e27e3f3cf10",
00:13:01.723 "is_configured": true,
00:13:01.723 "data_offset": 0,
00:13:01.723 "data_size": 65536
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "name": "BaseBdev2",
00:13:01.723 "uuid": "28268f94-c4f5-472f-964f-43f61a912703",
00:13:01.723 "is_configured": true,
00:13:01.723 "data_offset": 0,
00:13:01.723 "data_size": 65536
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "name": "BaseBdev3",
00:13:01.723 "uuid": "53ed3119-f137-48f8-a72b-15edd466d96e",
00:13:01.723 "is_configured": true,
00:13:01.723 "data_offset": 0,
00:13:01.723 "data_size": 65536
00:13:01.723 },
00:13:01.723 {
00:13:01.723 "name": "BaseBdev4",
00:13:01.723 "uuid": "b989d501-b529-4085-a7c1-ffed043b92fa",
00:13:01.723 "is_configured": true,
00:13:01.723 "data_offset": 0,
00:13:01.723 "data_size": 65536
00:13:01.723 }
00:13:01.723 ]
00:13:01.723 }
00:13:01.723 }
00:13:01.723 }'
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:13:01.723 BaseBdev2
00:13:01.723 BaseBdev3
00:13:01.723 BaseBdev4'
00:13:01.723 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:01.983 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.984 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.984 [2024-12-05 20:06:03.361107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:01.984 [2024-12-05 20:06:03.361138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:01.984 [2024-12-05 20:06:03.361194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:02.244 "name": "Existed_Raid",
00:13:02.244 "uuid": "3f3733ba-1468-42da-b6e1-a88b141520e1",
00:13:02.244 "strip_size_kb": 64,
00:13:02.244 "state": "offline",
00:13:02.244 "raid_level": "concat",
00:13:02.244 "superblock": false,
00:13:02.244 "num_base_bdevs": 4,
00:13:02.244 "num_base_bdevs_discovered": 3,
00:13:02.244 "num_base_bdevs_operational": 3,
00:13:02.244 "base_bdevs_list": [
00:13:02.244 {
00:13:02.244 "name": null,
00:13:02.244 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:02.244 "is_configured": false,
00:13:02.244 "data_offset": 0,
00:13:02.244 "data_size": 65536
00:13:02.244 },
00:13:02.244 {
00:13:02.244 "name": "BaseBdev2",
00:13:02.244 "uuid": "28268f94-c4f5-472f-964f-43f61a912703",
00:13:02.244 "is_configured": true,
00:13:02.244 "data_offset": 0,
00:13:02.244 "data_size": 65536
00:13:02.244 },
00:13:02.244 {
00:13:02.244 "name": "BaseBdev3",
00:13:02.244 "uuid": "53ed3119-f137-48f8-a72b-15edd466d96e",
00:13:02.244 "is_configured": true,
00:13:02.244 "data_offset": 0,
00:13:02.244 "data_size": 65536
00:13:02.244 },
00:13:02.244 {
00:13:02.244 "name": "BaseBdev4",
00:13:02.244 "uuid": "b989d501-b529-4085-a7c1-ffed043b92fa",
00:13:02.244 "is_configured": true,
00:13:02.244 "data_offset": 0,
00:13:02.244 "data_size": 65536
00:13:02.244 }
00:13:02.244 ]
00:13:02.244 }'
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:02.244 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.504 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.763 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.763 [2024-12-05 20:06:03.973954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:02.763 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.763 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:02.763 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.764 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.764 [2024-12-05 20:06:04.129066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:03.023 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.024 [2024-12-05 20:06:04.279827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:13:03.024 [2024-12-05 20:06:04.279887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.024 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.284 BaseBdev2
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.284 [
00:13:03.284 {
00:13:03.284 "name": "BaseBdev2",
00:13:03.284 "aliases": [
00:13:03.284 "ea5e8c47-ab1c-410e-ae3b-66928d263776"
00:13:03.284 ],
00:13:03.284 "product_name": "Malloc disk",
00:13:03.284 "block_size": 512,
00:13:03.284 "num_blocks": 65536,
00:13:03.284 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776",
00:13:03.284 "assigned_rate_limits": {
00:13:03.284 "rw_ios_per_sec": 0,
00:13:03.284 "rw_mbytes_per_sec": 0,
00:13:03.284 "r_mbytes_per_sec": 0,
00:13:03.284 "w_mbytes_per_sec": 0
00:13:03.284 },
00:13:03.284 "claimed": false,
00:13:03.284 "zoned": false,
00:13:03.284 "supported_io_types": {
00:13:03.284 "read": true,
00:13:03.284 "write": true,
00:13:03.284 "unmap": true,
00:13:03.284 "flush": true,
00:13:03.284 "reset": true,
00:13:03.284 "nvme_admin": false,
00:13:03.284 "nvme_io": false,
00:13:03.284 "nvme_io_md": false,
00:13:03.284 "write_zeroes": true,
00:13:03.284 "zcopy": true,
00:13:03.284 "get_zone_info": false,
00:13:03.284 "zone_management": false,
00:13:03.284 "zone_append": false,
00:13:03.284 "compare": false,
00:13:03.284 "compare_and_write": false,
00:13:03.284 "abort": true,
00:13:03.284 "seek_hole": false,
00:13:03.284 "seek_data": false,
00:13:03.284 "copy": true,
00:13:03.284 "nvme_iov_md": false
00:13:03.284 },
00:13:03.284 "memory_domains": [
00:13:03.284 {
00:13:03.284 "dma_device_id": "system",
00:13:03.284 "dma_device_type": 1
00:13:03.284 },
00:13:03.284 {
00:13:03.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:03.284 "dma_device_type": 2
00:13:03.284 }
00:13:03.284 ],
00:13:03.284 "driver_specific": {}
00:13:03.284 }
00:13:03.284 ]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.284 BaseBdev3
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.284 [ 00:13:03.284 { 00:13:03.284 "name": "BaseBdev3", 00:13:03.284 "aliases": [ 00:13:03.284 "9ed45c74-e619-4654-9146-439420bb851d" 00:13:03.284 ], 00:13:03.284 "product_name": "Malloc disk", 00:13:03.284 "block_size": 512, 00:13:03.284 "num_blocks": 65536, 00:13:03.284 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:03.284 "assigned_rate_limits": { 00:13:03.284 "rw_ios_per_sec": 0, 00:13:03.284 "rw_mbytes_per_sec": 0, 00:13:03.284 "r_mbytes_per_sec": 0, 00:13:03.284 "w_mbytes_per_sec": 0 00:13:03.284 }, 00:13:03.284 "claimed": false, 00:13:03.284 "zoned": false, 00:13:03.284 "supported_io_types": { 00:13:03.284 "read": true, 00:13:03.284 "write": true, 00:13:03.284 "unmap": true, 00:13:03.284 "flush": true, 00:13:03.284 "reset": true, 00:13:03.284 "nvme_admin": false, 00:13:03.284 "nvme_io": false, 00:13:03.284 "nvme_io_md": false, 00:13:03.284 "write_zeroes": true, 00:13:03.284 "zcopy": true, 00:13:03.284 "get_zone_info": false, 00:13:03.284 "zone_management": false, 00:13:03.284 "zone_append": false, 00:13:03.284 "compare": false, 00:13:03.284 "compare_and_write": false, 00:13:03.284 "abort": true, 00:13:03.284 "seek_hole": false, 00:13:03.284 "seek_data": false, 
00:13:03.284 "copy": true, 00:13:03.284 "nvme_iov_md": false 00:13:03.284 }, 00:13:03.284 "memory_domains": [ 00:13:03.284 { 00:13:03.284 "dma_device_id": "system", 00:13:03.284 "dma_device_type": 1 00:13:03.284 }, 00:13:03.284 { 00:13:03.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.284 "dma_device_type": 2 00:13:03.284 } 00:13:03.284 ], 00:13:03.284 "driver_specific": {} 00:13:03.284 } 00:13:03.284 ] 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.284 BaseBdev4 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.284 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.284 
20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.285 [ 00:13:03.285 { 00:13:03.285 "name": "BaseBdev4", 00:13:03.285 "aliases": [ 00:13:03.285 "9400bd54-50cd-416c-be65-ec3fb101f8e3" 00:13:03.285 ], 00:13:03.285 "product_name": "Malloc disk", 00:13:03.285 "block_size": 512, 00:13:03.285 "num_blocks": 65536, 00:13:03.285 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:03.285 "assigned_rate_limits": { 00:13:03.285 "rw_ios_per_sec": 0, 00:13:03.285 "rw_mbytes_per_sec": 0, 00:13:03.285 "r_mbytes_per_sec": 0, 00:13:03.285 "w_mbytes_per_sec": 0 00:13:03.285 }, 00:13:03.285 "claimed": false, 00:13:03.285 "zoned": false, 00:13:03.285 "supported_io_types": { 00:13:03.285 "read": true, 00:13:03.285 "write": true, 00:13:03.285 "unmap": true, 00:13:03.285 "flush": true, 00:13:03.285 "reset": true, 00:13:03.285 "nvme_admin": false, 00:13:03.285 "nvme_io": false, 00:13:03.285 "nvme_io_md": false, 00:13:03.285 "write_zeroes": true, 00:13:03.285 "zcopy": true, 00:13:03.285 "get_zone_info": false, 00:13:03.285 "zone_management": false, 00:13:03.285 "zone_append": false, 00:13:03.285 "compare": false, 00:13:03.285 "compare_and_write": false, 00:13:03.285 "abort": true, 00:13:03.285 "seek_hole": false, 00:13:03.285 "seek_data": false, 00:13:03.285 
"copy": true, 00:13:03.285 "nvme_iov_md": false 00:13:03.285 }, 00:13:03.285 "memory_domains": [ 00:13:03.285 { 00:13:03.285 "dma_device_id": "system", 00:13:03.285 "dma_device_type": 1 00:13:03.285 }, 00:13:03.285 { 00:13:03.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.285 "dma_device_type": 2 00:13:03.285 } 00:13:03.285 ], 00:13:03.285 "driver_specific": {} 00:13:03.285 } 00:13:03.285 ] 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.285 [2024-12-05 20:06:04.676468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.285 [2024-12-05 20:06:04.676576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.285 [2024-12-05 20:06:04.676628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.285 [2024-12-05 20:06:04.678578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.285 [2024-12-05 20:06:04.678676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.285 20:06:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.285 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.573 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.573 "name": "Existed_Raid", 00:13:03.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.573 "strip_size_kb": 64, 00:13:03.573 "state": "configuring", 00:13:03.573 
"raid_level": "concat", 00:13:03.573 "superblock": false, 00:13:03.573 "num_base_bdevs": 4, 00:13:03.573 "num_base_bdevs_discovered": 3, 00:13:03.573 "num_base_bdevs_operational": 4, 00:13:03.573 "base_bdevs_list": [ 00:13:03.573 { 00:13:03.573 "name": "BaseBdev1", 00:13:03.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.573 "is_configured": false, 00:13:03.573 "data_offset": 0, 00:13:03.573 "data_size": 0 00:13:03.573 }, 00:13:03.573 { 00:13:03.573 "name": "BaseBdev2", 00:13:03.573 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:03.573 "is_configured": true, 00:13:03.573 "data_offset": 0, 00:13:03.573 "data_size": 65536 00:13:03.573 }, 00:13:03.573 { 00:13:03.573 "name": "BaseBdev3", 00:13:03.573 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:03.573 "is_configured": true, 00:13:03.573 "data_offset": 0, 00:13:03.573 "data_size": 65536 00:13:03.573 }, 00:13:03.573 { 00:13:03.573 "name": "BaseBdev4", 00:13:03.573 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:03.573 "is_configured": true, 00:13:03.573 "data_offset": 0, 00:13:03.573 "data_size": 65536 00:13:03.573 } 00:13:03.573 ] 00:13:03.573 }' 00:13:03.573 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.573 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.834 [2024-12-05 20:06:05.135806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.834 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.835 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.835 "name": "Existed_Raid", 00:13:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.835 "strip_size_kb": 64, 00:13:03.835 "state": "configuring", 00:13:03.835 "raid_level": "concat", 00:13:03.835 "superblock": false, 
00:13:03.835 "num_base_bdevs": 4, 00:13:03.835 "num_base_bdevs_discovered": 2, 00:13:03.835 "num_base_bdevs_operational": 4, 00:13:03.835 "base_bdevs_list": [ 00:13:03.835 { 00:13:03.835 "name": "BaseBdev1", 00:13:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.835 "is_configured": false, 00:13:03.835 "data_offset": 0, 00:13:03.836 "data_size": 0 00:13:03.836 }, 00:13:03.836 { 00:13:03.836 "name": null, 00:13:03.836 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:03.836 "is_configured": false, 00:13:03.836 "data_offset": 0, 00:13:03.836 "data_size": 65536 00:13:03.836 }, 00:13:03.836 { 00:13:03.836 "name": "BaseBdev3", 00:13:03.836 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:03.836 "is_configured": true, 00:13:03.836 "data_offset": 0, 00:13:03.836 "data_size": 65536 00:13:03.836 }, 00:13:03.836 { 00:13:03.836 "name": "BaseBdev4", 00:13:03.836 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:03.836 "is_configured": true, 00:13:03.836 "data_offset": 0, 00:13:03.836 "data_size": 65536 00:13:03.836 } 00:13:03.836 ] 00:13:03.836 }' 00:13:03.836 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.836 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:04.404 20:06:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.404 [2024-12-05 20:06:05.725691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.404 BaseBdev1 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:04.404 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.405 [ 00:13:04.405 { 00:13:04.405 "name": "BaseBdev1", 00:13:04.405 "aliases": [ 00:13:04.405 "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6" 00:13:04.405 ], 00:13:04.405 "product_name": "Malloc disk", 00:13:04.405 "block_size": 512, 00:13:04.405 "num_blocks": 65536, 00:13:04.405 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:04.405 "assigned_rate_limits": { 00:13:04.405 "rw_ios_per_sec": 0, 00:13:04.405 "rw_mbytes_per_sec": 0, 00:13:04.405 "r_mbytes_per_sec": 0, 00:13:04.405 "w_mbytes_per_sec": 0 00:13:04.405 }, 00:13:04.405 "claimed": true, 00:13:04.405 "claim_type": "exclusive_write", 00:13:04.405 "zoned": false, 00:13:04.405 "supported_io_types": { 00:13:04.405 "read": true, 00:13:04.405 "write": true, 00:13:04.405 "unmap": true, 00:13:04.405 "flush": true, 00:13:04.405 "reset": true, 00:13:04.405 "nvme_admin": false, 00:13:04.405 "nvme_io": false, 00:13:04.405 "nvme_io_md": false, 00:13:04.405 "write_zeroes": true, 00:13:04.405 "zcopy": true, 00:13:04.405 "get_zone_info": false, 00:13:04.405 "zone_management": false, 00:13:04.405 "zone_append": false, 00:13:04.405 "compare": false, 00:13:04.405 "compare_and_write": false, 00:13:04.405 "abort": true, 00:13:04.405 "seek_hole": false, 00:13:04.405 "seek_data": false, 00:13:04.405 "copy": true, 00:13:04.405 "nvme_iov_md": false 00:13:04.405 }, 00:13:04.405 "memory_domains": [ 00:13:04.405 { 00:13:04.405 "dma_device_id": "system", 00:13:04.405 "dma_device_type": 1 00:13:04.405 }, 00:13:04.405 { 00:13:04.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.405 "dma_device_type": 2 00:13:04.405 } 00:13:04.405 ], 00:13:04.405 "driver_specific": {} 00:13:04.405 } 00:13:04.405 ] 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.405 "name": "Existed_Raid", 00:13:04.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.405 "strip_size_kb": 64, 00:13:04.405 "state": "configuring", 00:13:04.405 "raid_level": "concat", 00:13:04.405 "superblock": false, 
00:13:04.405 "num_base_bdevs": 4, 00:13:04.405 "num_base_bdevs_discovered": 3, 00:13:04.405 "num_base_bdevs_operational": 4, 00:13:04.405 "base_bdevs_list": [ 00:13:04.405 { 00:13:04.405 "name": "BaseBdev1", 00:13:04.405 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:04.405 "is_configured": true, 00:13:04.405 "data_offset": 0, 00:13:04.405 "data_size": 65536 00:13:04.405 }, 00:13:04.405 { 00:13:04.405 "name": null, 00:13:04.405 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:04.405 "is_configured": false, 00:13:04.405 "data_offset": 0, 00:13:04.405 "data_size": 65536 00:13:04.405 }, 00:13:04.405 { 00:13:04.405 "name": "BaseBdev3", 00:13:04.405 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:04.405 "is_configured": true, 00:13:04.405 "data_offset": 0, 00:13:04.405 "data_size": 65536 00:13:04.405 }, 00:13:04.405 { 00:13:04.405 "name": "BaseBdev4", 00:13:04.405 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:04.405 "is_configured": true, 00:13:04.405 "data_offset": 0, 00:13:04.405 "data_size": 65536 00:13:04.405 } 00:13:04.405 ] 00:13:04.405 }' 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.405 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:04.974 20:06:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 [2024-12-05 20:06:06.340789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.974 "name": "Existed_Raid", 00:13:04.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.974 "strip_size_kb": 64, 00:13:04.974 "state": "configuring", 00:13:04.974 "raid_level": "concat", 00:13:04.974 "superblock": false, 00:13:04.974 "num_base_bdevs": 4, 00:13:04.974 "num_base_bdevs_discovered": 2, 00:13:04.974 "num_base_bdevs_operational": 4, 00:13:04.974 "base_bdevs_list": [ 00:13:04.974 { 00:13:04.974 "name": "BaseBdev1", 00:13:04.974 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:04.974 "is_configured": true, 00:13:04.974 "data_offset": 0, 00:13:04.974 "data_size": 65536 00:13:04.974 }, 00:13:04.974 { 00:13:04.974 "name": null, 00:13:04.974 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:04.974 "is_configured": false, 00:13:04.974 "data_offset": 0, 00:13:04.974 "data_size": 65536 00:13:04.974 }, 00:13:04.974 { 00:13:04.974 "name": null, 00:13:04.974 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:04.974 "is_configured": false, 00:13:04.974 "data_offset": 0, 00:13:04.974 "data_size": 65536 00:13:04.974 }, 00:13:04.974 { 00:13:04.974 "name": "BaseBdev4", 00:13:04.974 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:04.974 "is_configured": true, 00:13:04.974 "data_offset": 0, 00:13:04.974 "data_size": 65536 00:13:04.974 } 00:13:04.974 ] 00:13:04.974 }' 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.974 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 [2024-12-05 20:06:06.855934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.544 "name": "Existed_Raid", 00:13:05.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.544 "strip_size_kb": 64, 00:13:05.544 "state": "configuring", 00:13:05.544 "raid_level": "concat", 00:13:05.544 "superblock": false, 00:13:05.544 "num_base_bdevs": 4, 00:13:05.544 "num_base_bdevs_discovered": 3, 00:13:05.544 "num_base_bdevs_operational": 4, 00:13:05.544 "base_bdevs_list": [ 00:13:05.544 { 00:13:05.544 "name": "BaseBdev1", 00:13:05.544 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:05.544 "is_configured": true, 00:13:05.544 "data_offset": 0, 00:13:05.544 "data_size": 65536 00:13:05.544 }, 00:13:05.544 { 00:13:05.544 "name": null, 00:13:05.544 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:05.544 "is_configured": false, 00:13:05.544 "data_offset": 0, 00:13:05.544 "data_size": 65536 00:13:05.544 }, 00:13:05.544 { 00:13:05.544 "name": "BaseBdev3", 00:13:05.544 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:05.544 
"is_configured": true, 00:13:05.544 "data_offset": 0, 00:13:05.544 "data_size": 65536 00:13:05.544 }, 00:13:05.544 { 00:13:05.544 "name": "BaseBdev4", 00:13:05.544 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:05.544 "is_configured": true, 00:13:05.544 "data_offset": 0, 00:13:05.544 "data_size": 65536 00:13:05.544 } 00:13:05.544 ] 00:13:05.544 }' 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.544 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.113 [2024-12-05 20:06:07.399074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.113 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.373 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.373 "name": "Existed_Raid", 00:13:06.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.373 "strip_size_kb": 64, 00:13:06.373 "state": "configuring", 00:13:06.373 "raid_level": "concat", 00:13:06.373 "superblock": false, 00:13:06.373 "num_base_bdevs": 4, 00:13:06.373 "num_base_bdevs_discovered": 2, 00:13:06.373 "num_base_bdevs_operational": 4, 
00:13:06.373 "base_bdevs_list": [ 00:13:06.373 { 00:13:06.373 "name": null, 00:13:06.373 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:06.373 "is_configured": false, 00:13:06.373 "data_offset": 0, 00:13:06.373 "data_size": 65536 00:13:06.373 }, 00:13:06.373 { 00:13:06.373 "name": null, 00:13:06.373 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:06.373 "is_configured": false, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": "BaseBdev3", 00:13:06.374 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": "BaseBdev4", 00:13:06.374 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 } 00:13:06.374 ] 00:13:06.374 }' 00:13:06.374 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.374 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.634 20:06:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.634 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.634 [2024-12-05 20:06:07.999854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.634 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.634 "name": "Existed_Raid", 00:13:06.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.634 "strip_size_kb": 64, 00:13:06.634 "state": "configuring", 00:13:06.634 "raid_level": "concat", 00:13:06.634 "superblock": false, 00:13:06.634 "num_base_bdevs": 4, 00:13:06.634 "num_base_bdevs_discovered": 3, 00:13:06.634 "num_base_bdevs_operational": 4, 00:13:06.634 "base_bdevs_list": [ 00:13:06.634 { 00:13:06.634 "name": null, 00:13:06.634 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:06.634 "is_configured": false, 00:13:06.634 "data_offset": 0, 00:13:06.634 "data_size": 65536 00:13:06.634 }, 00:13:06.634 { 00:13:06.634 "name": "BaseBdev2", 00:13:06.634 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:06.634 "is_configured": true, 00:13:06.634 "data_offset": 0, 00:13:06.634 "data_size": 65536 00:13:06.634 }, 00:13:06.634 { 00:13:06.634 "name": "BaseBdev3", 00:13:06.634 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:06.634 "is_configured": true, 00:13:06.634 "data_offset": 0, 00:13:06.634 "data_size": 65536 00:13:06.634 }, 00:13:06.634 { 00:13:06.634 "name": "BaseBdev4", 00:13:06.634 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:06.634 "is_configured": true, 00:13:06.634 "data_offset": 0, 00:13:06.635 "data_size": 65536 00:13:06.635 } 00:13:06.635 ] 00:13:06.635 }' 00:13:06.635 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.635 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a630e5b0-4c3e-4a15-bbf0-e143dd329ef6 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.203 [2024-12-05 20:06:08.615621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.203 [2024-12-05 20:06:08.615680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.203 [2024-12-05 20:06:08.615689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:07.203 [2024-12-05 20:06:08.616014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:07.203 [2024-12-05 20:06:08.616190] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.203 [2024-12-05 20:06:08.616202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.203 [2024-12-05 20:06:08.616548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.203 NewBaseBdev 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.203 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 [ 00:13:07.465 { 
00:13:07.465 "name": "NewBaseBdev", 00:13:07.465 "aliases": [ 00:13:07.465 "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6" 00:13:07.465 ], 00:13:07.465 "product_name": "Malloc disk", 00:13:07.465 "block_size": 512, 00:13:07.465 "num_blocks": 65536, 00:13:07.465 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:07.465 "assigned_rate_limits": { 00:13:07.465 "rw_ios_per_sec": 0, 00:13:07.465 "rw_mbytes_per_sec": 0, 00:13:07.465 "r_mbytes_per_sec": 0, 00:13:07.465 "w_mbytes_per_sec": 0 00:13:07.465 }, 00:13:07.465 "claimed": true, 00:13:07.465 "claim_type": "exclusive_write", 00:13:07.465 "zoned": false, 00:13:07.465 "supported_io_types": { 00:13:07.465 "read": true, 00:13:07.465 "write": true, 00:13:07.465 "unmap": true, 00:13:07.465 "flush": true, 00:13:07.465 "reset": true, 00:13:07.465 "nvme_admin": false, 00:13:07.465 "nvme_io": false, 00:13:07.465 "nvme_io_md": false, 00:13:07.465 "write_zeroes": true, 00:13:07.465 "zcopy": true, 00:13:07.465 "get_zone_info": false, 00:13:07.465 "zone_management": false, 00:13:07.465 "zone_append": false, 00:13:07.465 "compare": false, 00:13:07.465 "compare_and_write": false, 00:13:07.465 "abort": true, 00:13:07.465 "seek_hole": false, 00:13:07.465 "seek_data": false, 00:13:07.465 "copy": true, 00:13:07.465 "nvme_iov_md": false 00:13:07.465 }, 00:13:07.465 "memory_domains": [ 00:13:07.465 { 00:13:07.465 "dma_device_id": "system", 00:13:07.465 "dma_device_type": 1 00:13:07.465 }, 00:13:07.465 { 00:13:07.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.465 "dma_device_type": 2 00:13:07.465 } 00:13:07.465 ], 00:13:07.465 "driver_specific": {} 00:13:07.465 } 00:13:07.465 ] 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:07.465 
20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.465 "name": "Existed_Raid", 00:13:07.465 "uuid": "a2f26988-eb6a-4afb-b951-d45f33ceed6c", 00:13:07.465 "strip_size_kb": 64, 00:13:07.465 "state": "online", 00:13:07.465 "raid_level": "concat", 00:13:07.465 "superblock": false, 00:13:07.465 "num_base_bdevs": 4, 00:13:07.465 "num_base_bdevs_discovered": 4, 00:13:07.465 
"num_base_bdevs_operational": 4, 00:13:07.465 "base_bdevs_list": [ 00:13:07.465 { 00:13:07.465 "name": "NewBaseBdev", 00:13:07.465 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:07.465 "is_configured": true, 00:13:07.465 "data_offset": 0, 00:13:07.465 "data_size": 65536 00:13:07.465 }, 00:13:07.465 { 00:13:07.465 "name": "BaseBdev2", 00:13:07.465 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:07.465 "is_configured": true, 00:13:07.465 "data_offset": 0, 00:13:07.465 "data_size": 65536 00:13:07.465 }, 00:13:07.465 { 00:13:07.465 "name": "BaseBdev3", 00:13:07.465 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:07.465 "is_configured": true, 00:13:07.465 "data_offset": 0, 00:13:07.465 "data_size": 65536 00:13:07.465 }, 00:13:07.465 { 00:13:07.465 "name": "BaseBdev4", 00:13:07.465 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:07.465 "is_configured": true, 00:13:07.465 "data_offset": 0, 00:13:07.465 "data_size": 65536 00:13:07.465 } 00:13:07.465 ] 00:13:07.465 }' 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.465 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.725 
20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.725 [2024-12-05 20:06:09.107260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.725 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.725 "name": "Existed_Raid", 00:13:07.725 "aliases": [ 00:13:07.725 "a2f26988-eb6a-4afb-b951-d45f33ceed6c" 00:13:07.725 ], 00:13:07.725 "product_name": "Raid Volume", 00:13:07.725 "block_size": 512, 00:13:07.725 "num_blocks": 262144, 00:13:07.725 "uuid": "a2f26988-eb6a-4afb-b951-d45f33ceed6c", 00:13:07.725 "assigned_rate_limits": { 00:13:07.725 "rw_ios_per_sec": 0, 00:13:07.725 "rw_mbytes_per_sec": 0, 00:13:07.725 "r_mbytes_per_sec": 0, 00:13:07.725 "w_mbytes_per_sec": 0 00:13:07.725 }, 00:13:07.725 "claimed": false, 00:13:07.725 "zoned": false, 00:13:07.725 "supported_io_types": { 00:13:07.725 "read": true, 00:13:07.725 "write": true, 00:13:07.725 "unmap": true, 00:13:07.725 "flush": true, 00:13:07.725 "reset": true, 00:13:07.725 "nvme_admin": false, 00:13:07.725 "nvme_io": false, 00:13:07.725 "nvme_io_md": false, 00:13:07.725 "write_zeroes": true, 00:13:07.725 "zcopy": false, 00:13:07.725 "get_zone_info": false, 00:13:07.725 "zone_management": false, 00:13:07.725 "zone_append": false, 00:13:07.725 "compare": false, 00:13:07.725 "compare_and_write": false, 00:13:07.725 "abort": false, 00:13:07.725 "seek_hole": false, 00:13:07.725 "seek_data": false, 00:13:07.725 "copy": false, 00:13:07.725 "nvme_iov_md": false 00:13:07.725 }, 00:13:07.725 "memory_domains": [ 00:13:07.725 { 00:13:07.725 "dma_device_id": 
"system", 00:13:07.725 "dma_device_type": 1 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.725 "dma_device_type": 2 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "system", 00:13:07.725 "dma_device_type": 1 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.725 "dma_device_type": 2 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "system", 00:13:07.725 "dma_device_type": 1 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.725 "dma_device_type": 2 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "system", 00:13:07.725 "dma_device_type": 1 00:13:07.725 }, 00:13:07.725 { 00:13:07.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.725 "dma_device_type": 2 00:13:07.725 } 00:13:07.725 ], 00:13:07.725 "driver_specific": { 00:13:07.726 "raid": { 00:13:07.726 "uuid": "a2f26988-eb6a-4afb-b951-d45f33ceed6c", 00:13:07.726 "strip_size_kb": 64, 00:13:07.726 "state": "online", 00:13:07.726 "raid_level": "concat", 00:13:07.726 "superblock": false, 00:13:07.726 "num_base_bdevs": 4, 00:13:07.726 "num_base_bdevs_discovered": 4, 00:13:07.726 "num_base_bdevs_operational": 4, 00:13:07.726 "base_bdevs_list": [ 00:13:07.726 { 00:13:07.726 "name": "NewBaseBdev", 00:13:07.726 "uuid": "a630e5b0-4c3e-4a15-bbf0-e143dd329ef6", 00:13:07.726 "is_configured": true, 00:13:07.726 "data_offset": 0, 00:13:07.726 "data_size": 65536 00:13:07.726 }, 00:13:07.726 { 00:13:07.726 "name": "BaseBdev2", 00:13:07.726 "uuid": "ea5e8c47-ab1c-410e-ae3b-66928d263776", 00:13:07.726 "is_configured": true, 00:13:07.726 "data_offset": 0, 00:13:07.726 "data_size": 65536 00:13:07.726 }, 00:13:07.726 { 00:13:07.726 "name": "BaseBdev3", 00:13:07.726 "uuid": "9ed45c74-e619-4654-9146-439420bb851d", 00:13:07.726 "is_configured": true, 00:13:07.726 "data_offset": 0, 00:13:07.726 "data_size": 65536 00:13:07.726 }, 00:13:07.726 { 00:13:07.726 "name": 
"BaseBdev4", 00:13:07.726 "uuid": "9400bd54-50cd-416c-be65-ec3fb101f8e3", 00:13:07.726 "is_configured": true, 00:13:07.726 "data_offset": 0, 00:13:07.726 "data_size": 65536 00:13:07.726 } 00:13:07.726 ] 00:13:07.726 } 00:13:07.726 } 00:13:07.726 }' 00:13:07.726 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:07.985 BaseBdev2 00:13:07.985 BaseBdev3 00:13:07.985 BaseBdev4' 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.985 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.986 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.246 [2024-12-05 20:06:09.446308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.246 [2024-12-05 20:06:09.446397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.246 [2024-12-05 20:06:09.446540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.246 [2024-12-05 20:06:09.446690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.246 [2024-12-05 20:06:09.446756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71403 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71403 
']' 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71403 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71403 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71403' 00:13:08.246 killing process with pid 71403 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71403 00:13:08.246 [2024-12-05 20:06:09.496217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.246 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71403 00:13:08.506 [2024-12-05 20:06:09.939376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.884 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.884 00:13:09.884 real 0m12.025s 00:13:09.884 user 0m19.091s 00:13:09.884 sys 0m2.062s 00:13:09.884 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.884 ************************************ 00:13:09.884 END TEST raid_state_function_test 00:13:09.884 ************************************ 00:13:09.884 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.884 20:06:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:09.884 
20:06:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:09.884 20:06:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.884 20:06:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:09.884 ************************************
00:13:09.884 START TEST raid_state_function_test_sb
00:13:09.884 ************************************
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:13:09.884 Process raid pid: 72083
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72083
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72083'
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72083
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72083 ']'
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:09.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:09.884 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.143 [2024-12-05 20:06:11.382165] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
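The `waitforlisten 72083` step above blocks until the freshly launched `bdev_svc` process is up and accepting RPCs on `/var/tmp/spdk.sock` (note `max_retries=100` in the trace). A minimal sketch of that poll-until-ready pattern is shown below; this is a simplified stand-in, not `autotest_common.sh`'s actual implementation, and the readiness probe here is a hypothetical file-existence check, whereas the real helper also verifies the PID and issues an RPC against the socket:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style retry loop (assumption: simplified from the
# autotest_common.sh helper seen in the trace; names here are illustrative).
waitforcondition() {
	local rpc_addr=$1 max_retries=${2:-100} i
	for ((i = max_retries; i > 0; i--)); do
		# Hypothetical readiness check: "listening" here just means the
		# socket path exists. SPDK's helper additionally pings the RPC server.
		[[ -S $rpc_addr || -e $rpc_addr ]] && return 0
		sleep 0.1
	done
	return 1 # timed out without ever seeing the listener
}

# Usage: create the "socket" asynchronously, then wait for it to appear.
tmp=$(mktemp -u)
(sleep 0.3; touch "$tmp") &
waitforcondition "$tmp" 100 && echo "listener up"
wait
rm -f "$tmp"
```

The retry-count-times-sleep product bounds how long the caller can hang on a process that never comes up, which is why the trace shows `max_retries` being set before the wait begins.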
00:13:10.143 [2024-12-05 20:06:11.382391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:10.143 [2024-12-05 20:06:11.548457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.402 [2024-12-05 20:06:11.680058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.662 [2024-12-05 20:06:11.909122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.662 [2024-12-05 20:06:11.909244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.930 [2024-12-05 20:06:12.290433] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:10.930 [2024-12-05 20:06:12.290498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:10.930 [2024-12-05 20:06:12.290511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:10.930 [2024-12-05 20:06:12.290522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:10.930 [2024-12-05 20:06:12.290535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:10.930 [2024-12-05 20:06:12.290545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:10.930 [2024-12-05 20:06:12.290552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:10.930 [2024-12-05 20:06:12.290562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:10.930 "name": "Existed_Raid",
00:13:10.930 "uuid": "7b5afd81-d98c-4716-a933-916da6fb05d7",
00:13:10.930 "strip_size_kb": 64,
00:13:10.930 "state": "configuring",
00:13:10.930 "raid_level": "concat",
00:13:10.930 "superblock": true,
00:13:10.930 "num_base_bdevs": 4,
00:13:10.930 "num_base_bdevs_discovered": 0,
00:13:10.930 "num_base_bdevs_operational": 4,
00:13:10.930 "base_bdevs_list": [
00:13:10.930 {
00:13:10.930 "name": "BaseBdev1",
00:13:10.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.930 "is_configured": false,
00:13:10.930 "data_offset": 0,
00:13:10.930 "data_size": 0
00:13:10.930 },
00:13:10.930 {
00:13:10.930 "name": "BaseBdev2",
00:13:10.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.930 "is_configured": false,
00:13:10.930 "data_offset": 0,
00:13:10.930 "data_size": 0
00:13:10.930 },
00:13:10.930 {
00:13:10.930 "name": "BaseBdev3",
00:13:10.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.930 "is_configured": false,
00:13:10.930 "data_offset": 0,
00:13:10.930 "data_size": 0
00:13:10.930 },
00:13:10.930 {
00:13:10.930 "name": "BaseBdev4",
00:13:10.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.930 "is_configured": false,
00:13:10.930 "data_offset": 0,
00:13:10.930 "data_size": 0
00:13:10.930 }
00:13:10.930 ]
00:13:10.930 }'
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:10.930 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 [2024-12-05 20:06:12.757567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:11.519 [2024-12-05 20:06:12.757672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 [2024-12-05 20:06:12.765565] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:11.519 [2024-12-05 20:06:12.765654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:11.519 [2024-12-05 20:06:12.765687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:11.519 [2024-12-05 20:06:12.765713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:11.519 [2024-12-05 20:06:12.765734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:11.519 [2024-12-05 20:06:12.765757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:11.519 [2024-12-05 20:06:12.765832] bdev.c:8674:bdev_open_ext: *NOTICE*:
Currently unable to find bdev with name: BaseBdev4
00:13:11.519 [2024-12-05 20:06:12.765869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 [2024-12-05 20:06:12.815376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:11.519 BaseBdev1
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.519 [
00:13:11.519 {
00:13:11.519 "name": "BaseBdev1",
00:13:11.519 "aliases": [
00:13:11.519 "66fc0137-6610-4657-b866-5a03ecb06771"
00:13:11.519 ],
00:13:11.519 "product_name": "Malloc disk",
00:13:11.519 "block_size": 512,
00:13:11.519 "num_blocks": 65536,
00:13:11.519 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771",
00:13:11.519 "assigned_rate_limits": {
00:13:11.519 "rw_ios_per_sec": 0,
00:13:11.519 "rw_mbytes_per_sec": 0,
00:13:11.519 "r_mbytes_per_sec": 0,
00:13:11.519 "w_mbytes_per_sec": 0
00:13:11.519 },
00:13:11.519 "claimed": true,
00:13:11.519 "claim_type": "exclusive_write",
00:13:11.519 "zoned": false,
00:13:11.519 "supported_io_types": {
00:13:11.519 "read": true,
00:13:11.519 "write": true,
00:13:11.519 "unmap": true,
00:13:11.519 "flush": true,
00:13:11.519 "reset": true,
00:13:11.519 "nvme_admin": false,
00:13:11.519 "nvme_io": false,
00:13:11.519 "nvme_io_md": false,
00:13:11.519 "write_zeroes": true,
00:13:11.519 "zcopy": true,
00:13:11.519 "get_zone_info": false,
00:13:11.519 "zone_management": false,
00:13:11.519 "zone_append": false,
00:13:11.519 "compare": false,
00:13:11.519 "compare_and_write": false,
00:13:11.519 "abort": true,
00:13:11.519 "seek_hole": false,
00:13:11.519 "seek_data": false,
00:13:11.519 "copy": true,
00:13:11.519 "nvme_iov_md": false
00:13:11.519 },
00:13:11.519 "memory_domains": [
00:13:11.519 {
00:13:11.519 "dma_device_id": "system",
00:13:11.519 "dma_device_type": 1
00:13:11.519 },
00:13:11.519 {
00:13:11.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:11.519 "dma_device_type": 2
00:13:11.519 }
00:13:11.519 ],
00:13:11.519 "driver_specific": {}
00:13:11.519 }
00:13:11.519 ]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:11.519 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.520 "name": "Existed_Raid",
00:13:11.520 "uuid": "9ac9c8a0-c7b2-4266-beac-26fae4a91781",
00:13:11.520 "strip_size_kb": 64,
00:13:11.520 "state": "configuring",
00:13:11.520 "raid_level": "concat",
00:13:11.520 "superblock": true,
00:13:11.520 "num_base_bdevs": 4,
00:13:11.520 "num_base_bdevs_discovered": 1,
00:13:11.520 "num_base_bdevs_operational": 4,
00:13:11.520 "base_bdevs_list": [
00:13:11.520 {
00:13:11.520 "name": "BaseBdev1",
00:13:11.520 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771",
00:13:11.520 "is_configured": true,
00:13:11.520 "data_offset": 2048,
00:13:11.520 "data_size": 63488
00:13:11.520 },
00:13:11.520 {
00:13:11.520 "name": "BaseBdev2",
00:13:11.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.520 "is_configured": false,
00:13:11.520 "data_offset": 0,
00:13:11.520 "data_size": 0
00:13:11.520 },
00:13:11.520 {
00:13:11.520 "name": "BaseBdev3",
00:13:11.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.520 "is_configured": false,
00:13:11.520 "data_offset": 0,
00:13:11.520 "data_size": 0
00:13:11.520 },
00:13:11.520 {
00:13:11.520 "name": "BaseBdev4",
00:13:11.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.520 "is_configured": false,
00:13:11.520 "data_offset": 0,
00:13:11.520 "data_size": 0
00:13:11.520 }
00:13:11.520 ]
00:13:11.520 }'
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.520 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.088 20:06:13
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.088 [2024-12-05 20:06:13.318604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:12.088 [2024-12-05 20:06:13.318743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.088 [2024-12-05 20:06:13.330643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:12.088 [2024-12-05 20:06:13.332787] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:12.088 [2024-12-05 20:06:13.332876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:12.088 [2024-12-05 20:06:13.332925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:12.088 [2024-12-05 20:06:13.332957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:12.088 [2024-12-05 20:06:13.332980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:12.088 [2024-12-05 20:06:13.333005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:12.088 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:12.089 "name": "Existed_Raid",
00:13:12.089 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932",
00:13:12.089 "strip_size_kb": 64,
00:13:12.089 "state": "configuring",
00:13:12.089 "raid_level": "concat",
00:13:12.089 "superblock": true,
00:13:12.089 "num_base_bdevs": 4,
00:13:12.089 "num_base_bdevs_discovered": 1,
00:13:12.089 "num_base_bdevs_operational": 4,
00:13:12.089 "base_bdevs_list": [
00:13:12.089 {
00:13:12.089 "name": "BaseBdev1",
00:13:12.089 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771",
00:13:12.089 "is_configured": true,
00:13:12.089 "data_offset": 2048,
00:13:12.089 "data_size": 63488
00:13:12.089 },
00:13:12.089 {
00:13:12.089 "name": "BaseBdev2",
00:13:12.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.089 "is_configured": false,
00:13:12.089 "data_offset": 0,
00:13:12.089 "data_size": 0
00:13:12.089 },
00:13:12.089 {
00:13:12.089 "name": "BaseBdev3",
00:13:12.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.089 "is_configured": false,
00:13:12.089 "data_offset": 0,
00:13:12.089 "data_size": 0
00:13:12.089 },
00:13:12.089 {
00:13:12.089 "name": "BaseBdev4",
00:13:12.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.089 "is_configured": false,
00:13:12.089 "data_offset": 0,
00:13:12.089 "data_size": 0
00:13:12.089 }
00:13:12.089 ]
00:13:12.089 }'
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:12.089 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.655 [2024-12-05 20:06:13.826787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:12.655 BaseBdev2
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.655 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.655 [
00:13:12.655 {
00:13:12.655 "name": "BaseBdev2",
00:13:12.655 "aliases": [
00:13:12.655 "7cef3e38-8cde-45bc-99a6-7f56131bebcf"
00:13:12.655 ],
00:13:12.655 "product_name": "Malloc disk",
00:13:12.655 "block_size": 512,
00:13:12.655 "num_blocks": 65536,
00:13:12.655 "uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf",
00:13:12.655 "assigned_rate_limits": {
00:13:12.655 "rw_ios_per_sec": 0,
00:13:12.655 "rw_mbytes_per_sec": 0,
00:13:12.655 "r_mbytes_per_sec": 0,
00:13:12.655 "w_mbytes_per_sec": 0
00:13:12.655 },
00:13:12.655 "claimed": true,
00:13:12.656 "claim_type": "exclusive_write",
00:13:12.656 "zoned": false,
00:13:12.656 "supported_io_types": {
00:13:12.656 "read": true,
00:13:12.656 "write": true,
00:13:12.656 "unmap": true,
00:13:12.656 "flush": true,
00:13:12.656 "reset": true,
00:13:12.656 "nvme_admin": false,
00:13:12.656 "nvme_io": false,
00:13:12.656 "nvme_io_md": false,
00:13:12.656 "write_zeroes": true,
00:13:12.656 "zcopy": true,
00:13:12.656 "get_zone_info": false,
00:13:12.656 "zone_management": false,
00:13:12.656 "zone_append": false,
00:13:12.656 "compare": false,
00:13:12.656 "compare_and_write": false,
00:13:12.656 "abort": true,
00:13:12.656 "seek_hole": false,
00:13:12.656 "seek_data": false,
00:13:12.656 "copy": true,
00:13:12.656 "nvme_iov_md": false
00:13:12.656 },
00:13:12.656 "memory_domains": [
00:13:12.656 {
00:13:12.656 "dma_device_id": "system",
00:13:12.656 "dma_device_type": 1
00:13:12.656 },
00:13:12.656 {
00:13:12.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:12.656 "dma_device_type": 2
00:13:12.656 }
00:13:12.656 ],
00:13:12.656 "driver_specific": {}
00:13:12.656 }
00:13:12.656 ]
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:12.656 "name": "Existed_Raid",
00:13:12.656 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932",
00:13:12.656 "strip_size_kb": 64,
00:13:12.656 "state": "configuring",
00:13:12.656 "raid_level": "concat",
00:13:12.656 "superblock": true,
00:13:12.656 "num_base_bdevs": 4,
00:13:12.656 "num_base_bdevs_discovered": 2,
00:13:12.656 "num_base_bdevs_operational": 4,
00:13:12.656 "base_bdevs_list": [
00:13:12.656 {
00:13:12.656 "name": "BaseBdev1",
00:13:12.656 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771",
00:13:12.656 "is_configured": true,
00:13:12.656 "data_offset": 2048,
00:13:12.656 "data_size": 63488
00:13:12.656 },
00:13:12.656 {
00:13:12.656 "name": "BaseBdev2",
00:13:12.656 "uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf",
00:13:12.656 "is_configured": true,
00:13:12.656 "data_offset": 2048,
00:13:12.656 "data_size": 63488
00:13:12.656 },
00:13:12.656 {
00:13:12.656 "name": "BaseBdev3",
00:13:12.656 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.656 "is_configured": false,
00:13:12.656 "data_offset": 0,
00:13:12.656 "data_size": 0
00:13:12.656 },
00:13:12.656 {
00:13:12.656 "name": "BaseBdev4",
00:13:12.656 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.656 "is_configured": false,
00:13:12.656 "data_offset": 0,
00:13:12.656 "data_size": 0
00:13:12.656 }
00:13:12.656 ]
00:13:12.656 }'
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:12.656 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:13.222 [2024-12-05 20:06:14.405858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:13.222 BaseBdev3
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.222 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:13.222 [
00:13:13.222 {
00:13:13.222 "name": "BaseBdev3",
00:13:13.222 "aliases": [
00:13:13.222 "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce"
00:13:13.222 ],
00:13:13.222 "product_name": "Malloc disk",
00:13:13.222 "block_size": 512,
00:13:13.222 "num_blocks": 65536,
00:13:13.222 "uuid": "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce",
00:13:13.222 "assigned_rate_limits": {
00:13:13.222 "rw_ios_per_sec": 0,
00:13:13.222 "rw_mbytes_per_sec": 0,
00:13:13.222 "r_mbytes_per_sec": 0,
00:13:13.222 "w_mbytes_per_sec": 0
00:13:13.222 },
00:13:13.222 "claimed": true,
00:13:13.222 "claim_type": "exclusive_write",
00:13:13.222 "zoned": false,
00:13:13.222 "supported_io_types": {
00:13:13.222 "read": true,
00:13:13.222 "write": true,
00:13:13.222 "unmap": true,
00:13:13.222 "flush": true,
00:13:13.222 "reset": true,
00:13:13.222 "nvme_admin": false,
00:13:13.222 "nvme_io": false,
00:13:13.222 "nvme_io_md": false,
00:13:13.222 "write_zeroes": true,
00:13:13.222 "zcopy": true,
00:13:13.222 "get_zone_info": false,
00:13:13.222 "zone_management": false,
00:13:13.222 "zone_append": false,
00:13:13.222 "compare": false,
00:13:13.222 "compare_and_write": false,
00:13:13.222 "abort": true,
00:13:13.222 "seek_hole": false,
00:13:13.222 "seek_data": false,
00:13:13.222 "copy": true,
00:13:13.222 "nvme_iov_md": false
00:13:13.222 },
00:13:13.222 "memory_domains": [
00:13:13.222 {
00:13:13.222 "dma_device_id": "system",
00:13:13.222 "dma_device_type": 1
00:13:13.222 },
00:13:13.222 {
00:13:13.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:13.223 "dma_device_type": 2
00:13:13.223 }
00:13:13.223 ],
00:13:13.223 "driver_specific": {}
00:13:13.223 }
00:13:13.223 ]
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.223 "name": "Existed_Raid", 00:13:13.223 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932", 00:13:13.223 "strip_size_kb": 64, 00:13:13.223 "state": "configuring", 00:13:13.223 "raid_level": "concat", 00:13:13.223 "superblock": true, 00:13:13.223 "num_base_bdevs": 4, 00:13:13.223 "num_base_bdevs_discovered": 3, 00:13:13.223 "num_base_bdevs_operational": 4, 00:13:13.223 "base_bdevs_list": [ 00:13:13.223 { 00:13:13.223 "name": "BaseBdev1", 00:13:13.223 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771", 00:13:13.223 "is_configured": true, 00:13:13.223 "data_offset": 2048, 00:13:13.223 "data_size": 63488 00:13:13.223 }, 00:13:13.223 { 00:13:13.223 "name": "BaseBdev2", 00:13:13.223 
"uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf", 00:13:13.223 "is_configured": true, 00:13:13.223 "data_offset": 2048, 00:13:13.223 "data_size": 63488 00:13:13.223 }, 00:13:13.223 { 00:13:13.223 "name": "BaseBdev3", 00:13:13.223 "uuid": "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce", 00:13:13.223 "is_configured": true, 00:13:13.223 "data_offset": 2048, 00:13:13.223 "data_size": 63488 00:13:13.223 }, 00:13:13.223 { 00:13:13.223 "name": "BaseBdev4", 00:13:13.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.223 "is_configured": false, 00:13:13.223 "data_offset": 0, 00:13:13.223 "data_size": 0 00:13:13.223 } 00:13:13.223 ] 00:13:13.223 }' 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.223 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.791 [2024-12-05 20:06:14.979512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.791 [2024-12-05 20:06:14.979836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.791 [2024-12-05 20:06:14.979853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.791 [2024-12-05 20:06:14.980203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:13.791 [2024-12-05 20:06:14.980385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.791 [2024-12-05 20:06:14.980488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:13:13.791 BaseBdev4 00:13:13.791 [2024-12-05 20:06:14.980719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.791 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.791 [ 00:13:13.791 { 00:13:13.791 "name": "BaseBdev4", 00:13:13.791 "aliases": [ 00:13:13.791 "3f58bc71-659b-4aa2-9d66-e9261ecf0271" 00:13:13.791 ], 00:13:13.791 "product_name": "Malloc disk", 00:13:13.791 "block_size": 512, 
00:13:13.792 "num_blocks": 65536, 00:13:13.792 "uuid": "3f58bc71-659b-4aa2-9d66-e9261ecf0271", 00:13:13.792 "assigned_rate_limits": { 00:13:13.792 "rw_ios_per_sec": 0, 00:13:13.792 "rw_mbytes_per_sec": 0, 00:13:13.792 "r_mbytes_per_sec": 0, 00:13:13.792 "w_mbytes_per_sec": 0 00:13:13.792 }, 00:13:13.792 "claimed": true, 00:13:13.792 "claim_type": "exclusive_write", 00:13:13.792 "zoned": false, 00:13:13.792 "supported_io_types": { 00:13:13.792 "read": true, 00:13:13.792 "write": true, 00:13:13.792 "unmap": true, 00:13:13.792 "flush": true, 00:13:13.792 "reset": true, 00:13:13.792 "nvme_admin": false, 00:13:13.792 "nvme_io": false, 00:13:13.792 "nvme_io_md": false, 00:13:13.792 "write_zeroes": true, 00:13:13.792 "zcopy": true, 00:13:13.792 "get_zone_info": false, 00:13:13.792 "zone_management": false, 00:13:13.792 "zone_append": false, 00:13:13.792 "compare": false, 00:13:13.792 "compare_and_write": false, 00:13:13.792 "abort": true, 00:13:13.792 "seek_hole": false, 00:13:13.792 "seek_data": false, 00:13:13.792 "copy": true, 00:13:13.792 "nvme_iov_md": false 00:13:13.792 }, 00:13:13.792 "memory_domains": [ 00:13:13.792 { 00:13:13.792 "dma_device_id": "system", 00:13:13.792 "dma_device_type": 1 00:13:13.792 }, 00:13:13.792 { 00:13:13.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.792 "dma_device_type": 2 00:13:13.792 } 00:13:13.792 ], 00:13:13.792 "driver_specific": {} 00:13:13.792 } 00:13:13.792 ] 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.792 "name": "Existed_Raid", 00:13:13.792 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932", 00:13:13.792 "strip_size_kb": 64, 00:13:13.792 "state": "online", 00:13:13.792 "raid_level": "concat", 00:13:13.792 "superblock": true, 00:13:13.792 "num_base_bdevs": 
4, 00:13:13.792 "num_base_bdevs_discovered": 4, 00:13:13.792 "num_base_bdevs_operational": 4, 00:13:13.792 "base_bdevs_list": [ 00:13:13.792 { 00:13:13.792 "name": "BaseBdev1", 00:13:13.792 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771", 00:13:13.792 "is_configured": true, 00:13:13.792 "data_offset": 2048, 00:13:13.792 "data_size": 63488 00:13:13.792 }, 00:13:13.792 { 00:13:13.792 "name": "BaseBdev2", 00:13:13.792 "uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf", 00:13:13.792 "is_configured": true, 00:13:13.792 "data_offset": 2048, 00:13:13.792 "data_size": 63488 00:13:13.792 }, 00:13:13.792 { 00:13:13.792 "name": "BaseBdev3", 00:13:13.792 "uuid": "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce", 00:13:13.792 "is_configured": true, 00:13:13.792 "data_offset": 2048, 00:13:13.792 "data_size": 63488 00:13:13.792 }, 00:13:13.792 { 00:13:13.792 "name": "BaseBdev4", 00:13:13.792 "uuid": "3f58bc71-659b-4aa2-9d66-e9261ecf0271", 00:13:13.792 "is_configured": true, 00:13:13.792 "data_offset": 2048, 00:13:13.792 "data_size": 63488 00:13:13.792 } 00:13:13.792 ] 00:13:13.792 }' 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.792 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.359 
20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 [2024-12-05 20:06:15.515051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.359 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.359 "name": "Existed_Raid", 00:13:14.359 "aliases": [ 00:13:14.359 "37541522-edc3-4413-9f65-a1b95e0b6932" 00:13:14.359 ], 00:13:14.359 "product_name": "Raid Volume", 00:13:14.359 "block_size": 512, 00:13:14.359 "num_blocks": 253952, 00:13:14.359 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932", 00:13:14.359 "assigned_rate_limits": { 00:13:14.359 "rw_ios_per_sec": 0, 00:13:14.359 "rw_mbytes_per_sec": 0, 00:13:14.359 "r_mbytes_per_sec": 0, 00:13:14.359 "w_mbytes_per_sec": 0 00:13:14.359 }, 00:13:14.359 "claimed": false, 00:13:14.359 "zoned": false, 00:13:14.359 "supported_io_types": { 00:13:14.359 "read": true, 00:13:14.359 "write": true, 00:13:14.359 "unmap": true, 00:13:14.359 "flush": true, 00:13:14.359 "reset": true, 00:13:14.359 "nvme_admin": false, 00:13:14.359 "nvme_io": false, 00:13:14.359 "nvme_io_md": false, 00:13:14.359 "write_zeroes": true, 00:13:14.359 "zcopy": false, 00:13:14.359 "get_zone_info": false, 00:13:14.359 "zone_management": false, 00:13:14.359 "zone_append": false, 00:13:14.359 "compare": false, 00:13:14.359 "compare_and_write": false, 00:13:14.359 "abort": false, 00:13:14.359 "seek_hole": false, 00:13:14.359 "seek_data": false, 00:13:14.359 "copy": false, 00:13:14.359 
"nvme_iov_md": false 00:13:14.359 }, 00:13:14.359 "memory_domains": [ 00:13:14.359 { 00:13:14.359 "dma_device_id": "system", 00:13:14.359 "dma_device_type": 1 00:13:14.359 }, 00:13:14.359 { 00:13:14.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.359 "dma_device_type": 2 00:13:14.359 }, 00:13:14.359 { 00:13:14.359 "dma_device_id": "system", 00:13:14.359 "dma_device_type": 1 00:13:14.359 }, 00:13:14.359 { 00:13:14.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.359 "dma_device_type": 2 00:13:14.359 }, 00:13:14.359 { 00:13:14.360 "dma_device_id": "system", 00:13:14.360 "dma_device_type": 1 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.360 "dma_device_type": 2 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "dma_device_id": "system", 00:13:14.360 "dma_device_type": 1 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.360 "dma_device_type": 2 00:13:14.360 } 00:13:14.360 ], 00:13:14.360 "driver_specific": { 00:13:14.360 "raid": { 00:13:14.360 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932", 00:13:14.360 "strip_size_kb": 64, 00:13:14.360 "state": "online", 00:13:14.360 "raid_level": "concat", 00:13:14.360 "superblock": true, 00:13:14.360 "num_base_bdevs": 4, 00:13:14.360 "num_base_bdevs_discovered": 4, 00:13:14.360 "num_base_bdevs_operational": 4, 00:13:14.360 "base_bdevs_list": [ 00:13:14.360 { 00:13:14.360 "name": "BaseBdev1", 00:13:14.360 "uuid": "66fc0137-6610-4657-b866-5a03ecb06771", 00:13:14.360 "is_configured": true, 00:13:14.360 "data_offset": 2048, 00:13:14.360 "data_size": 63488 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "name": "BaseBdev2", 00:13:14.360 "uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf", 00:13:14.360 "is_configured": true, 00:13:14.360 "data_offset": 2048, 00:13:14.360 "data_size": 63488 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "name": "BaseBdev3", 00:13:14.360 "uuid": "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce", 00:13:14.360 "is_configured": true, 
00:13:14.360 "data_offset": 2048, 00:13:14.360 "data_size": 63488 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "name": "BaseBdev4", 00:13:14.360 "uuid": "3f58bc71-659b-4aa2-9d66-e9261ecf0271", 00:13:14.360 "is_configured": true, 00:13:14.360 "data_offset": 2048, 00:13:14.360 "data_size": 63488 00:13:14.360 } 00:13:14.360 ] 00:13:14.360 } 00:13:14.360 } 00:13:14.360 }' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:14.360 BaseBdev2 00:13:14.360 BaseBdev3 00:13:14.360 BaseBdev4' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.360 20:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:14.360 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.619 [2024-12-05 20:06:15.842176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.619 [2024-12-05 20:06:15.842221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.619 [2024-12-05 20:06:15.842277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.619 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:14.619 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.619 "name": "Existed_Raid", 00:13:14.619 "uuid": "37541522-edc3-4413-9f65-a1b95e0b6932", 00:13:14.619 "strip_size_kb": 64, 00:13:14.619 "state": "offline", 00:13:14.619 "raid_level": "concat", 00:13:14.619 "superblock": true, 00:13:14.619 "num_base_bdevs": 4, 00:13:14.619 "num_base_bdevs_discovered": 3, 00:13:14.619 "num_base_bdevs_operational": 3, 00:13:14.619 "base_bdevs_list": [ 00:13:14.619 { 00:13:14.619 "name": null, 00:13:14.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.619 "is_configured": false, 00:13:14.619 "data_offset": 0, 00:13:14.619 "data_size": 63488 00:13:14.619 }, 00:13:14.619 { 00:13:14.619 "name": "BaseBdev2", 00:13:14.619 "uuid": "7cef3e38-8cde-45bc-99a6-7f56131bebcf", 00:13:14.619 "is_configured": true, 00:13:14.619 "data_offset": 2048, 00:13:14.619 "data_size": 63488 00:13:14.619 }, 00:13:14.619 { 00:13:14.619 "name": "BaseBdev3", 00:13:14.619 "uuid": "fe04bfd1-13eb-4bac-b1b2-eb85c6a108ce", 00:13:14.619 "is_configured": true, 00:13:14.619 "data_offset": 2048, 00:13:14.619 "data_size": 63488 00:13:14.619 }, 00:13:14.619 { 00:13:14.619 "name": "BaseBdev4", 00:13:14.619 "uuid": "3f58bc71-659b-4aa2-9d66-e9261ecf0271", 00:13:14.619 "is_configured": true, 00:13:14.619 "data_offset": 2048, 00:13:14.619 "data_size": 63488 00:13:14.619 } 00:13:14.619 ] 00:13:14.619 }' 00:13:14.619 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.619 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.189 
20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.189 [2024-12-05 20:06:16.472480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.189 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.449 [2024-12-05 20:06:16.647440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:15.449 20:06:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.449 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.449 [2024-12-05 20:06:16.815774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:15.449 [2024-12-05 20:06:16.815829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 BaseBdev2 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 [ 00:13:15.709 { 00:13:15.709 "name": "BaseBdev2", 00:13:15.709 "aliases": [ 00:13:15.709 
"69519626-1d0b-4802-af2d-35c09789ea6e" 00:13:15.709 ], 00:13:15.709 "product_name": "Malloc disk", 00:13:15.709 "block_size": 512, 00:13:15.709 "num_blocks": 65536, 00:13:15.709 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:15.709 "assigned_rate_limits": { 00:13:15.709 "rw_ios_per_sec": 0, 00:13:15.709 "rw_mbytes_per_sec": 0, 00:13:15.709 "r_mbytes_per_sec": 0, 00:13:15.709 "w_mbytes_per_sec": 0 00:13:15.709 }, 00:13:15.709 "claimed": false, 00:13:15.709 "zoned": false, 00:13:15.709 "supported_io_types": { 00:13:15.709 "read": true, 00:13:15.709 "write": true, 00:13:15.709 "unmap": true, 00:13:15.709 "flush": true, 00:13:15.709 "reset": true, 00:13:15.709 "nvme_admin": false, 00:13:15.709 "nvme_io": false, 00:13:15.709 "nvme_io_md": false, 00:13:15.709 "write_zeroes": true, 00:13:15.709 "zcopy": true, 00:13:15.709 "get_zone_info": false, 00:13:15.709 "zone_management": false, 00:13:15.709 "zone_append": false, 00:13:15.709 "compare": false, 00:13:15.709 "compare_and_write": false, 00:13:15.709 "abort": true, 00:13:15.709 "seek_hole": false, 00:13:15.709 "seek_data": false, 00:13:15.709 "copy": true, 00:13:15.709 "nvme_iov_md": false 00:13:15.709 }, 00:13:15.709 "memory_domains": [ 00:13:15.709 { 00:13:15.709 "dma_device_id": "system", 00:13:15.709 "dma_device_type": 1 00:13:15.709 }, 00:13:15.709 { 00:13:15.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.709 "dma_device_type": 2 00:13:15.709 } 00:13:15.709 ], 00:13:15.709 "driver_specific": {} 00:13:15.709 } 00:13:15.709 ] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.709 20:06:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 BaseBdev3 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:15.709 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.710 [ 00:13:15.710 { 
00:13:15.710 "name": "BaseBdev3", 00:13:15.710 "aliases": [ 00:13:15.710 "2a287bff-f855-4bf8-93e5-3753f43a25e6" 00:13:15.710 ], 00:13:15.710 "product_name": "Malloc disk", 00:13:15.710 "block_size": 512, 00:13:15.710 "num_blocks": 65536, 00:13:15.710 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:15.710 "assigned_rate_limits": { 00:13:15.710 "rw_ios_per_sec": 0, 00:13:15.710 "rw_mbytes_per_sec": 0, 00:13:15.710 "r_mbytes_per_sec": 0, 00:13:15.710 "w_mbytes_per_sec": 0 00:13:15.710 }, 00:13:15.710 "claimed": false, 00:13:15.710 "zoned": false, 00:13:15.710 "supported_io_types": { 00:13:15.710 "read": true, 00:13:15.710 "write": true, 00:13:15.710 "unmap": true, 00:13:15.710 "flush": true, 00:13:15.710 "reset": true, 00:13:15.710 "nvme_admin": false, 00:13:15.710 "nvme_io": false, 00:13:15.710 "nvme_io_md": false, 00:13:15.710 "write_zeroes": true, 00:13:15.710 "zcopy": true, 00:13:15.710 "get_zone_info": false, 00:13:15.710 "zone_management": false, 00:13:15.710 "zone_append": false, 00:13:15.710 "compare": false, 00:13:15.710 "compare_and_write": false, 00:13:15.710 "abort": true, 00:13:15.710 "seek_hole": false, 00:13:15.710 "seek_data": false, 00:13:15.710 "copy": true, 00:13:15.710 "nvme_iov_md": false 00:13:15.710 }, 00:13:15.710 "memory_domains": [ 00:13:15.710 { 00:13:15.710 "dma_device_id": "system", 00:13:15.710 "dma_device_type": 1 00:13:15.710 }, 00:13:15.710 { 00:13:15.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.710 "dma_device_type": 2 00:13:15.710 } 00:13:15.710 ], 00:13:15.710 "driver_specific": {} 00:13:15.710 } 00:13:15.710 ] 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.710 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.970 BaseBdev4 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:15.970 [ 00:13:15.970 { 00:13:15.970 "name": "BaseBdev4", 00:13:15.970 "aliases": [ 00:13:15.970 "78e44086-3bfd-45a8-ab80-7229eb13fa68" 00:13:15.970 ], 00:13:15.970 "product_name": "Malloc disk", 00:13:15.970 "block_size": 512, 00:13:15.970 "num_blocks": 65536, 00:13:15.970 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:15.970 "assigned_rate_limits": { 00:13:15.970 "rw_ios_per_sec": 0, 00:13:15.970 "rw_mbytes_per_sec": 0, 00:13:15.970 "r_mbytes_per_sec": 0, 00:13:15.970 "w_mbytes_per_sec": 0 00:13:15.970 }, 00:13:15.970 "claimed": false, 00:13:15.970 "zoned": false, 00:13:15.970 "supported_io_types": { 00:13:15.970 "read": true, 00:13:15.970 "write": true, 00:13:15.970 "unmap": true, 00:13:15.970 "flush": true, 00:13:15.970 "reset": true, 00:13:15.970 "nvme_admin": false, 00:13:15.970 "nvme_io": false, 00:13:15.970 "nvme_io_md": false, 00:13:15.970 "write_zeroes": true, 00:13:15.970 "zcopy": true, 00:13:15.970 "get_zone_info": false, 00:13:15.970 "zone_management": false, 00:13:15.970 "zone_append": false, 00:13:15.970 "compare": false, 00:13:15.970 "compare_and_write": false, 00:13:15.970 "abort": true, 00:13:15.970 "seek_hole": false, 00:13:15.970 "seek_data": false, 00:13:15.970 "copy": true, 00:13:15.970 "nvme_iov_md": false 00:13:15.970 }, 00:13:15.970 "memory_domains": [ 00:13:15.970 { 00:13:15.970 "dma_device_id": "system", 00:13:15.970 "dma_device_type": 1 00:13:15.970 }, 00:13:15.970 { 00:13:15.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.970 "dma_device_type": 2 00:13:15.970 } 00:13:15.970 ], 00:13:15.970 "driver_specific": {} 00:13:15.970 } 00:13:15.970 ] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.970 20:06:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.970 [2024-12-05 20:06:17.232464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.970 [2024-12-05 20:06:17.232580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.970 [2024-12-05 20:06:17.232647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.970 [2024-12-05 20:06:17.234839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.970 [2024-12-05 20:06:17.234967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.970 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.971 "name": "Existed_Raid", 00:13:15.971 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:15.971 "strip_size_kb": 64, 00:13:15.971 "state": "configuring", 00:13:15.971 "raid_level": "concat", 00:13:15.971 "superblock": true, 00:13:15.971 "num_base_bdevs": 4, 00:13:15.971 "num_base_bdevs_discovered": 3, 00:13:15.971 "num_base_bdevs_operational": 4, 00:13:15.971 "base_bdevs_list": [ 00:13:15.971 { 00:13:15.971 "name": "BaseBdev1", 00:13:15.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.971 "is_configured": false, 00:13:15.971 "data_offset": 0, 00:13:15.971 "data_size": 0 00:13:15.971 }, 00:13:15.971 { 00:13:15.971 "name": "BaseBdev2", 00:13:15.971 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:15.971 "is_configured": true, 00:13:15.971 "data_offset": 2048, 00:13:15.971 "data_size": 63488 
00:13:15.971 }, 00:13:15.971 { 00:13:15.971 "name": "BaseBdev3", 00:13:15.971 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:15.971 "is_configured": true, 00:13:15.971 "data_offset": 2048, 00:13:15.971 "data_size": 63488 00:13:15.971 }, 00:13:15.971 { 00:13:15.971 "name": "BaseBdev4", 00:13:15.971 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:15.971 "is_configured": true, 00:13:15.971 "data_offset": 2048, 00:13:15.971 "data_size": 63488 00:13:15.971 } 00:13:15.971 ] 00:13:15.971 }' 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.971 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.540 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:16.540 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.540 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.540 [2024-12-05 20:06:17.759598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.540 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.540 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.541 "name": "Existed_Raid", 00:13:16.541 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:16.541 "strip_size_kb": 64, 00:13:16.541 "state": "configuring", 00:13:16.541 "raid_level": "concat", 00:13:16.541 "superblock": true, 00:13:16.541 "num_base_bdevs": 4, 00:13:16.541 "num_base_bdevs_discovered": 2, 00:13:16.541 "num_base_bdevs_operational": 4, 00:13:16.541 "base_bdevs_list": [ 00:13:16.541 { 00:13:16.541 "name": "BaseBdev1", 00:13:16.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.541 "is_configured": false, 00:13:16.541 "data_offset": 0, 00:13:16.541 "data_size": 0 00:13:16.541 }, 00:13:16.541 { 00:13:16.541 "name": null, 00:13:16.541 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:16.541 "is_configured": false, 00:13:16.541 "data_offset": 0, 00:13:16.541 "data_size": 63488 
00:13:16.541 }, 00:13:16.541 { 00:13:16.541 "name": "BaseBdev3", 00:13:16.541 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:16.541 "is_configured": true, 00:13:16.541 "data_offset": 2048, 00:13:16.541 "data_size": 63488 00:13:16.541 }, 00:13:16.541 { 00:13:16.541 "name": "BaseBdev4", 00:13:16.541 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:16.541 "is_configured": true, 00:13:16.541 "data_offset": 2048, 00:13:16.541 "data_size": 63488 00:13:16.541 } 00:13:16.541 ] 00:13:16.541 }' 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.541 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.801 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.801 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.801 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.801 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 [2024-12-05 20:06:18.311295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.061 BaseBdev1 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 [ 00:13:17.061 { 00:13:17.061 "name": "BaseBdev1", 00:13:17.061 "aliases": [ 00:13:17.061 "27d88744-b9ad-4211-96d7-f90f244184e9" 00:13:17.061 ], 00:13:17.061 "product_name": "Malloc disk", 00:13:17.061 "block_size": 512, 00:13:17.061 "num_blocks": 65536, 00:13:17.061 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:17.061 "assigned_rate_limits": { 00:13:17.061 "rw_ios_per_sec": 0, 00:13:17.061 "rw_mbytes_per_sec": 0, 
00:13:17.061 "r_mbytes_per_sec": 0, 00:13:17.061 "w_mbytes_per_sec": 0 00:13:17.061 }, 00:13:17.061 "claimed": true, 00:13:17.061 "claim_type": "exclusive_write", 00:13:17.061 "zoned": false, 00:13:17.061 "supported_io_types": { 00:13:17.061 "read": true, 00:13:17.061 "write": true, 00:13:17.061 "unmap": true, 00:13:17.061 "flush": true, 00:13:17.061 "reset": true, 00:13:17.061 "nvme_admin": false, 00:13:17.061 "nvme_io": false, 00:13:17.061 "nvme_io_md": false, 00:13:17.061 "write_zeroes": true, 00:13:17.061 "zcopy": true, 00:13:17.061 "get_zone_info": false, 00:13:17.061 "zone_management": false, 00:13:17.061 "zone_append": false, 00:13:17.061 "compare": false, 00:13:17.061 "compare_and_write": false, 00:13:17.061 "abort": true, 00:13:17.061 "seek_hole": false, 00:13:17.061 "seek_data": false, 00:13:17.061 "copy": true, 00:13:17.061 "nvme_iov_md": false 00:13:17.061 }, 00:13:17.061 "memory_domains": [ 00:13:17.061 { 00:13:17.061 "dma_device_id": "system", 00:13:17.061 "dma_device_type": 1 00:13:17.061 }, 00:13:17.061 { 00:13:17.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.062 "dma_device_type": 2 00:13:17.062 } 00:13:17.062 ], 00:13:17.062 "driver_specific": {} 00:13:17.062 } 00:13:17.062 ] 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.062 20:06:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.062 "name": "Existed_Raid", 00:13:17.062 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:17.062 "strip_size_kb": 64, 00:13:17.062 "state": "configuring", 00:13:17.062 "raid_level": "concat", 00:13:17.062 "superblock": true, 00:13:17.062 "num_base_bdevs": 4, 00:13:17.062 "num_base_bdevs_discovered": 3, 00:13:17.062 "num_base_bdevs_operational": 4, 00:13:17.062 "base_bdevs_list": [ 00:13:17.062 { 00:13:17.062 "name": "BaseBdev1", 00:13:17.062 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:17.062 "is_configured": true, 00:13:17.062 "data_offset": 2048, 00:13:17.062 "data_size": 63488 00:13:17.062 }, 00:13:17.062 { 
00:13:17.062 "name": null, 00:13:17.062 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:17.062 "is_configured": false, 00:13:17.062 "data_offset": 0, 00:13:17.062 "data_size": 63488 00:13:17.062 }, 00:13:17.062 { 00:13:17.062 "name": "BaseBdev3", 00:13:17.062 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:17.062 "is_configured": true, 00:13:17.062 "data_offset": 2048, 00:13:17.062 "data_size": 63488 00:13:17.062 }, 00:13:17.062 { 00:13:17.062 "name": "BaseBdev4", 00:13:17.062 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:17.062 "is_configured": true, 00:13:17.062 "data_offset": 2048, 00:13:17.062 "data_size": 63488 00:13:17.062 } 00:13:17.062 ] 00:13:17.062 }' 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.062 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.632 [2024-12-05 20:06:18.898442] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.632 20:06:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.632 "name": "Existed_Raid", 00:13:17.632 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:17.632 "strip_size_kb": 64, 00:13:17.632 "state": "configuring", 00:13:17.632 "raid_level": "concat", 00:13:17.632 "superblock": true, 00:13:17.632 "num_base_bdevs": 4, 00:13:17.632 "num_base_bdevs_discovered": 2, 00:13:17.632 "num_base_bdevs_operational": 4, 00:13:17.632 "base_bdevs_list": [ 00:13:17.632 { 00:13:17.632 "name": "BaseBdev1", 00:13:17.632 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:17.632 "is_configured": true, 00:13:17.632 "data_offset": 2048, 00:13:17.632 "data_size": 63488 00:13:17.632 }, 00:13:17.632 { 00:13:17.632 "name": null, 00:13:17.632 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:17.632 "is_configured": false, 00:13:17.632 "data_offset": 0, 00:13:17.632 "data_size": 63488 00:13:17.632 }, 00:13:17.632 { 00:13:17.632 "name": null, 00:13:17.632 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:17.632 "is_configured": false, 00:13:17.632 "data_offset": 0, 00:13:17.632 "data_size": 63488 00:13:17.632 }, 00:13:17.632 { 00:13:17.632 "name": "BaseBdev4", 00:13:17.632 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:17.632 "is_configured": true, 00:13:17.632 "data_offset": 2048, 00:13:17.632 "data_size": 63488 00:13:17.632 } 00:13:17.632 ] 00:13:17.632 }' 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.632 20:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.203 
20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.203 [2024-12-05 20:06:19.425530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.203 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.203 "name": "Existed_Raid", 00:13:18.203 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:18.203 "strip_size_kb": 64, 00:13:18.203 "state": "configuring", 00:13:18.203 "raid_level": "concat", 00:13:18.203 "superblock": true, 00:13:18.203 "num_base_bdevs": 4, 00:13:18.203 "num_base_bdevs_discovered": 3, 00:13:18.203 "num_base_bdevs_operational": 4, 00:13:18.203 "base_bdevs_list": [ 00:13:18.203 { 00:13:18.203 "name": "BaseBdev1", 00:13:18.203 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:18.203 "is_configured": true, 00:13:18.203 "data_offset": 2048, 00:13:18.203 "data_size": 63488 00:13:18.203 }, 00:13:18.203 { 00:13:18.203 "name": null, 00:13:18.203 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:18.203 "is_configured": false, 00:13:18.203 "data_offset": 0, 00:13:18.203 "data_size": 63488 00:13:18.203 }, 00:13:18.203 { 00:13:18.203 "name": "BaseBdev3", 00:13:18.203 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:18.203 "is_configured": true, 00:13:18.203 "data_offset": 2048, 00:13:18.203 "data_size": 63488 00:13:18.203 }, 00:13:18.203 { 00:13:18.203 "name": "BaseBdev4", 00:13:18.203 "uuid": 
"78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:18.203 "is_configured": true, 00:13:18.203 "data_offset": 2048, 00:13:18.203 "data_size": 63488 00:13:18.203 } 00:13:18.203 ] 00:13:18.204 }' 00:13:18.204 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.204 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.772 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 [2024-12-05 20:06:19.992658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.772 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.772 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.773 "name": "Existed_Raid", 00:13:18.773 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:18.773 "strip_size_kb": 64, 00:13:18.773 "state": "configuring", 00:13:18.773 "raid_level": "concat", 00:13:18.773 "superblock": true, 00:13:18.773 "num_base_bdevs": 4, 00:13:18.773 "num_base_bdevs_discovered": 2, 00:13:18.773 "num_base_bdevs_operational": 4, 00:13:18.773 "base_bdevs_list": [ 00:13:18.773 { 00:13:18.773 "name": null, 00:13:18.773 
"uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:18.773 "is_configured": false, 00:13:18.773 "data_offset": 0, 00:13:18.773 "data_size": 63488 00:13:18.773 }, 00:13:18.773 { 00:13:18.773 "name": null, 00:13:18.773 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:18.773 "is_configured": false, 00:13:18.773 "data_offset": 0, 00:13:18.773 "data_size": 63488 00:13:18.773 }, 00:13:18.773 { 00:13:18.773 "name": "BaseBdev3", 00:13:18.773 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:18.773 "is_configured": true, 00:13:18.773 "data_offset": 2048, 00:13:18.773 "data_size": 63488 00:13:18.773 }, 00:13:18.773 { 00:13:18.773 "name": "BaseBdev4", 00:13:18.773 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:18.773 "is_configured": true, 00:13:18.773 "data_offset": 2048, 00:13:18.773 "data_size": 63488 00:13:18.773 } 00:13:18.773 ] 00:13:18.773 }' 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.773 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.373 [2024-12-05 20:06:20.609628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.373 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.373 "name": "Existed_Raid", 00:13:19.373 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:19.373 "strip_size_kb": 64, 00:13:19.373 "state": "configuring", 00:13:19.373 "raid_level": "concat", 00:13:19.373 "superblock": true, 00:13:19.373 "num_base_bdevs": 4, 00:13:19.373 "num_base_bdevs_discovered": 3, 00:13:19.373 "num_base_bdevs_operational": 4, 00:13:19.373 "base_bdevs_list": [ 00:13:19.373 { 00:13:19.373 "name": null, 00:13:19.373 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:19.373 "is_configured": false, 00:13:19.373 "data_offset": 0, 00:13:19.373 "data_size": 63488 00:13:19.373 }, 00:13:19.373 { 00:13:19.373 "name": "BaseBdev2", 00:13:19.373 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:19.373 "is_configured": true, 00:13:19.373 "data_offset": 2048, 00:13:19.373 "data_size": 63488 00:13:19.373 }, 00:13:19.373 { 00:13:19.373 "name": "BaseBdev3", 00:13:19.374 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:19.374 "is_configured": true, 00:13:19.374 "data_offset": 2048, 00:13:19.374 "data_size": 63488 00:13:19.374 }, 00:13:19.374 { 00:13:19.374 "name": "BaseBdev4", 00:13:19.374 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:19.374 "is_configured": true, 00:13:19.374 "data_offset": 2048, 00:13:19.374 "data_size": 63488 00:13:19.374 } 00:13:19.374 ] 00:13:19.374 }' 00:13:19.374 20:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.374 20:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.944 20:06:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 27d88744-b9ad-4211-96d7-f90f244184e9 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 [2024-12-05 20:06:21.208080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:19.944 [2024-12-05 20:06:21.208494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:19.944 [2024-12-05 20:06:21.208513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:19.944 [2024-12-05 20:06:21.208800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:19.944 [2024-12-05 20:06:21.208995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:19.944 [2024-12-05 20:06:21.209010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:19.944 NewBaseBdev 00:13:19.944 [2024-12-05 20:06:21.209181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:19.944 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.944 20:06:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 [ 00:13:19.944 { 00:13:19.944 "name": "NewBaseBdev", 00:13:19.944 "aliases": [ 00:13:19.944 "27d88744-b9ad-4211-96d7-f90f244184e9" 00:13:19.944 ], 00:13:19.944 "product_name": "Malloc disk", 00:13:19.944 "block_size": 512, 00:13:19.944 "num_blocks": 65536, 00:13:19.944 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:19.944 "assigned_rate_limits": { 00:13:19.944 "rw_ios_per_sec": 0, 00:13:19.944 "rw_mbytes_per_sec": 0, 00:13:19.944 "r_mbytes_per_sec": 0, 00:13:19.944 "w_mbytes_per_sec": 0 00:13:19.944 }, 00:13:19.945 "claimed": true, 00:13:19.945 "claim_type": "exclusive_write", 00:13:19.945 "zoned": false, 00:13:19.945 "supported_io_types": { 00:13:19.945 "read": true, 00:13:19.945 "write": true, 00:13:19.945 "unmap": true, 00:13:19.945 "flush": true, 00:13:19.945 "reset": true, 00:13:19.945 "nvme_admin": false, 00:13:19.945 "nvme_io": false, 00:13:19.945 "nvme_io_md": false, 00:13:19.945 "write_zeroes": true, 00:13:19.945 "zcopy": true, 00:13:19.945 "get_zone_info": false, 00:13:19.945 "zone_management": false, 00:13:19.945 "zone_append": false, 00:13:19.945 "compare": false, 00:13:19.945 "compare_and_write": false, 00:13:19.945 "abort": true, 00:13:19.945 "seek_hole": false, 00:13:19.945 "seek_data": false, 00:13:19.945 "copy": true, 00:13:19.945 "nvme_iov_md": false 00:13:19.945 }, 00:13:19.945 "memory_domains": [ 00:13:19.945 { 00:13:19.945 "dma_device_id": "system", 00:13:19.945 "dma_device_type": 1 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.945 "dma_device_type": 2 00:13:19.945 } 00:13:19.945 ], 00:13:19.945 "driver_specific": {} 00:13:19.945 } 00:13:19.945 ] 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:19.945 20:06:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.945 "name": "Existed_Raid", 00:13:19.945 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:19.945 "strip_size_kb": 64, 00:13:19.945 
"state": "online", 00:13:19.945 "raid_level": "concat", 00:13:19.945 "superblock": true, 00:13:19.945 "num_base_bdevs": 4, 00:13:19.945 "num_base_bdevs_discovered": 4, 00:13:19.945 "num_base_bdevs_operational": 4, 00:13:19.945 "base_bdevs_list": [ 00:13:19.945 { 00:13:19.945 "name": "NewBaseBdev", 00:13:19.945 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:19.945 "is_configured": true, 00:13:19.945 "data_offset": 2048, 00:13:19.945 "data_size": 63488 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "name": "BaseBdev2", 00:13:19.945 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:19.945 "is_configured": true, 00:13:19.945 "data_offset": 2048, 00:13:19.945 "data_size": 63488 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "name": "BaseBdev3", 00:13:19.945 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:19.945 "is_configured": true, 00:13:19.945 "data_offset": 2048, 00:13:19.945 "data_size": 63488 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "name": "BaseBdev4", 00:13:19.945 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:19.945 "is_configured": true, 00:13:19.945 "data_offset": 2048, 00:13:19.945 "data_size": 63488 00:13:19.945 } 00:13:19.945 ] 00:13:19.945 }' 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.945 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:20.528 
20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 [2024-12-05 20:06:21.739637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.528 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:20.528 "name": "Existed_Raid", 00:13:20.528 "aliases": [ 00:13:20.528 "49e9ad6c-d08c-4f9e-ba10-ec500da05996" 00:13:20.528 ], 00:13:20.528 "product_name": "Raid Volume", 00:13:20.528 "block_size": 512, 00:13:20.528 "num_blocks": 253952, 00:13:20.528 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:20.528 "assigned_rate_limits": { 00:13:20.528 "rw_ios_per_sec": 0, 00:13:20.528 "rw_mbytes_per_sec": 0, 00:13:20.528 "r_mbytes_per_sec": 0, 00:13:20.528 "w_mbytes_per_sec": 0 00:13:20.528 }, 00:13:20.528 "claimed": false, 00:13:20.528 "zoned": false, 00:13:20.528 "supported_io_types": { 00:13:20.528 "read": true, 00:13:20.528 "write": true, 00:13:20.528 "unmap": true, 00:13:20.528 "flush": true, 00:13:20.528 "reset": true, 00:13:20.528 "nvme_admin": false, 00:13:20.528 "nvme_io": false, 00:13:20.528 "nvme_io_md": false, 00:13:20.528 "write_zeroes": true, 00:13:20.528 "zcopy": false, 00:13:20.528 "get_zone_info": false, 00:13:20.528 "zone_management": false, 00:13:20.528 "zone_append": false, 00:13:20.528 "compare": false, 00:13:20.528 "compare_and_write": false, 00:13:20.528 "abort": 
false, 00:13:20.528 "seek_hole": false, 00:13:20.528 "seek_data": false, 00:13:20.528 "copy": false, 00:13:20.528 "nvme_iov_md": false 00:13:20.528 }, 00:13:20.528 "memory_domains": [ 00:13:20.528 { 00:13:20.528 "dma_device_id": "system", 00:13:20.528 "dma_device_type": 1 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.528 "dma_device_type": 2 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "system", 00:13:20.528 "dma_device_type": 1 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.528 "dma_device_type": 2 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "system", 00:13:20.528 "dma_device_type": 1 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.528 "dma_device_type": 2 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "system", 00:13:20.528 "dma_device_type": 1 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.528 "dma_device_type": 2 00:13:20.528 } 00:13:20.528 ], 00:13:20.528 "driver_specific": { 00:13:20.528 "raid": { 00:13:20.528 "uuid": "49e9ad6c-d08c-4f9e-ba10-ec500da05996", 00:13:20.528 "strip_size_kb": 64, 00:13:20.528 "state": "online", 00:13:20.528 "raid_level": "concat", 00:13:20.528 "superblock": true, 00:13:20.528 "num_base_bdevs": 4, 00:13:20.528 "num_base_bdevs_discovered": 4, 00:13:20.528 "num_base_bdevs_operational": 4, 00:13:20.528 "base_bdevs_list": [ 00:13:20.528 { 00:13:20.528 "name": "NewBaseBdev", 00:13:20.528 "uuid": "27d88744-b9ad-4211-96d7-f90f244184e9", 00:13:20.528 "is_configured": true, 00:13:20.528 "data_offset": 2048, 00:13:20.528 "data_size": 63488 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 "name": "BaseBdev2", 00:13:20.528 "uuid": "69519626-1d0b-4802-af2d-35c09789ea6e", 00:13:20.528 "is_configured": true, 00:13:20.528 "data_offset": 2048, 00:13:20.528 "data_size": 63488 00:13:20.528 }, 00:13:20.528 { 00:13:20.528 
"name": "BaseBdev3", 00:13:20.528 "uuid": "2a287bff-f855-4bf8-93e5-3753f43a25e6", 00:13:20.528 "is_configured": true, 00:13:20.528 "data_offset": 2048, 00:13:20.528 "data_size": 63488 00:13:20.528 }, 00:13:20.528 { 00:13:20.529 "name": "BaseBdev4", 00:13:20.529 "uuid": "78e44086-3bfd-45a8-ab80-7229eb13fa68", 00:13:20.529 "is_configured": true, 00:13:20.529 "data_offset": 2048, 00:13:20.529 "data_size": 63488 00:13:20.529 } 00:13:20.529 ] 00:13:20.529 } 00:13:20.529 } 00:13:20.529 }' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:20.529 BaseBdev2 00:13:20.529 BaseBdev3 00:13:20.529 BaseBdev4' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.529 20:06:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.529 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.787 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.788 20:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.788 [2024-12-05 20:06:22.070684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.788 [2024-12-05 20:06:22.070763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.788 [2024-12-05 20:06:22.070882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.788 [2024-12-05 20:06:22.070999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.788 [2024-12-05 20:06:22.071045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72083 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72083 ']' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72083 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72083 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72083' 00:13:20.788 killing process with pid 72083 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72083 00:13:20.788 [2024-12-05 20:06:22.109760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.788 20:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72083 00:13:21.355 [2024-12-05 20:06:22.523462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.732 20:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.732 00:13:22.732 real 0m12.452s 00:13:22.732 user 0m19.826s 00:13:22.732 sys 0m2.219s 00:13:22.732 20:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.732 
************************************ 00:13:22.732 END TEST raid_state_function_test_sb 00:13:22.732 ************************************ 00:13:22.732 20:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.732 20:06:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:22.732 20:06:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:22.732 20:06:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.732 20:06:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.732 ************************************ 00:13:22.732 START TEST raid_superblock_test 00:13:22.732 ************************************ 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72759 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72759 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72759 ']' 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.732 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:22.732 [2024-12-05 20:06:23.890402] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:13:22.732 [2024-12-05 20:06:23.890617] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72759 ] 00:13:22.732 [2024-12-05 20:06:24.073845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.991 [2024-12-05 20:06:24.194854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.991 [2024-12-05 20:06:24.404624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.991 [2024-12-05 20:06:24.404767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:23.562 
20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 malloc1 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 [2024-12-05 20:06:24.817460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:23.562 [2024-12-05 20:06:24.817537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.562 [2024-12-05 20:06:24.817562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.562 [2024-12-05 20:06:24.817571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.562 [2024-12-05 20:06:24.819939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.562 [2024-12-05 20:06:24.819981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:23.562 pt1 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 malloc2 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 [2024-12-05 20:06:24.880537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.562 [2024-12-05 20:06:24.880681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.562 [2024-12-05 
20:06:24.880739] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.562 [2024-12-05 20:06:24.880798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.562 [2024-12-05 20:06:24.883322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.562 [2024-12-05 20:06:24.883426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.562 pt2 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.562 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.563 malloc3 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.563 [2024-12-05 20:06:24.950243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:23.563 [2024-12-05 20:06:24.950369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.563 [2024-12-05 20:06:24.950433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:23.563 [2024-12-05 20:06:24.950474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.563 [2024-12-05 20:06:24.952725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.563 [2024-12-05 20:06:24.952817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:23.563 pt3 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.563 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.823 malloc4 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.823 [2024-12-05 20:06:25.008875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:23.823 [2024-12-05 20:06:25.009027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.823 [2024-12-05 20:06:25.009113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:23.823 [2024-12-05 20:06:25.009165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.823 [2024-12-05 20:06:25.011558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.823 [2024-12-05 20:06:25.011647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:23.823 pt4 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.823 [2024-12-05 20:06:25.020925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:23.823 [2024-12-05 20:06:25.023084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.823 [2024-12-05 20:06:25.023252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:23.823 [2024-12-05 20:06:25.023367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:23.823 [2024-12-05 20:06:25.023633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:23.823 [2024-12-05 20:06:25.023691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:23.823 [2024-12-05 20:06:25.024060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:23.823 [2024-12-05 20:06:25.024334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:23.823 [2024-12-05 20:06:25.024397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:23.823 [2024-12-05 20:06:25.024657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.823 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.824 "name": "raid_bdev1", 00:13:23.824 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:23.824 "strip_size_kb": 64, 00:13:23.824 "state": "online", 00:13:23.824 "raid_level": "concat", 00:13:23.824 "superblock": true, 00:13:23.824 "num_base_bdevs": 4, 00:13:23.824 "num_base_bdevs_discovered": 4, 00:13:23.824 "num_base_bdevs_operational": 4, 00:13:23.824 "base_bdevs_list": [ 00:13:23.824 { 00:13:23.824 "name": "pt1", 00:13:23.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.824 "is_configured": true, 00:13:23.824 
"data_offset": 2048, 00:13:23.824 "data_size": 63488 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "name": "pt2", 00:13:23.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 2048, 00:13:23.824 "data_size": 63488 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "name": "pt3", 00:13:23.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 2048, 00:13:23.824 "data_size": 63488 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "name": "pt4", 00:13:23.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 2048, 00:13:23.824 "data_size": 63488 00:13:23.824 } 00:13:23.824 ] 00:13:23.824 }' 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.824 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.082 [2024-12-05 20:06:25.488409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.082 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.341 "name": "raid_bdev1", 00:13:24.341 "aliases": [ 00:13:24.341 "72e7dd6f-7322-4e58-84c7-67c20e64de81" 00:13:24.341 ], 00:13:24.341 "product_name": "Raid Volume", 00:13:24.341 "block_size": 512, 00:13:24.341 "num_blocks": 253952, 00:13:24.341 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:24.341 "assigned_rate_limits": { 00:13:24.341 "rw_ios_per_sec": 0, 00:13:24.341 "rw_mbytes_per_sec": 0, 00:13:24.341 "r_mbytes_per_sec": 0, 00:13:24.341 "w_mbytes_per_sec": 0 00:13:24.341 }, 00:13:24.341 "claimed": false, 00:13:24.341 "zoned": false, 00:13:24.341 "supported_io_types": { 00:13:24.341 "read": true, 00:13:24.341 "write": true, 00:13:24.341 "unmap": true, 00:13:24.341 "flush": true, 00:13:24.341 "reset": true, 00:13:24.341 "nvme_admin": false, 00:13:24.341 "nvme_io": false, 00:13:24.341 "nvme_io_md": false, 00:13:24.341 "write_zeroes": true, 00:13:24.341 "zcopy": false, 00:13:24.341 "get_zone_info": false, 00:13:24.341 "zone_management": false, 00:13:24.341 "zone_append": false, 00:13:24.341 "compare": false, 00:13:24.341 "compare_and_write": false, 00:13:24.341 "abort": false, 00:13:24.341 "seek_hole": false, 00:13:24.341 "seek_data": false, 00:13:24.341 "copy": false, 00:13:24.341 "nvme_iov_md": false 00:13:24.341 }, 00:13:24.341 "memory_domains": [ 00:13:24.341 { 00:13:24.341 "dma_device_id": "system", 00:13:24.341 "dma_device_type": 1 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.341 "dma_device_type": 2 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "system", 00:13:24.341 "dma_device_type": 1 00:13:24.341 }, 00:13:24.341 { 
00:13:24.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.341 "dma_device_type": 2 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "system", 00:13:24.341 "dma_device_type": 1 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.341 "dma_device_type": 2 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "system", 00:13:24.341 "dma_device_type": 1 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.341 "dma_device_type": 2 00:13:24.341 } 00:13:24.341 ], 00:13:24.341 "driver_specific": { 00:13:24.341 "raid": { 00:13:24.341 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:24.341 "strip_size_kb": 64, 00:13:24.341 "state": "online", 00:13:24.341 "raid_level": "concat", 00:13:24.341 "superblock": true, 00:13:24.341 "num_base_bdevs": 4, 00:13:24.341 "num_base_bdevs_discovered": 4, 00:13:24.341 "num_base_bdevs_operational": 4, 00:13:24.341 "base_bdevs_list": [ 00:13:24.341 { 00:13:24.341 "name": "pt1", 00:13:24.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.341 "is_configured": true, 00:13:24.341 "data_offset": 2048, 00:13:24.341 "data_size": 63488 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "name": "pt2", 00:13:24.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.341 "is_configured": true, 00:13:24.341 "data_offset": 2048, 00:13:24.341 "data_size": 63488 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "name": "pt3", 00:13:24.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.341 "is_configured": true, 00:13:24.341 "data_offset": 2048, 00:13:24.341 "data_size": 63488 00:13:24.341 }, 00:13:24.341 { 00:13:24.341 "name": "pt4", 00:13:24.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.341 "is_configured": true, 00:13:24.341 "data_offset": 2048, 00:13:24.341 "data_size": 63488 00:13:24.341 } 00:13:24.341 ] 00:13:24.341 } 00:13:24.341 } 00:13:24.341 }' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.341 pt2 00:13:24.341 pt3 00:13:24.341 pt4' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.341 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.342 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:24.606 [2024-12-05 20:06:25.819833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72e7dd6f-7322-4e58-84c7-67c20e64de81 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 72e7dd6f-7322-4e58-84c7-67c20e64de81 ']' 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 [2024-12-05 20:06:25.867449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.606 [2024-12-05 20:06:25.867537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.606 [2024-12-05 20:06:25.867685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.606 [2024-12-05 20:06:25.867814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.606 [2024-12-05 20:06:25.867870] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # local es=0 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.606 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.873 [2024-12-05 20:06:26.039206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:24.873 [2024-12-05 20:06:26.041358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:24.873 [2024-12-05 20:06:26.041416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:24.873 [2024-12-05 20:06:26.041454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:24.873 [2024-12-05 20:06:26.041522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:24.873 [2024-12-05 20:06:26.041577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:24.873 [2024-12-05 20:06:26.041599] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:24.873 [2024-12-05 20:06:26.041620] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:24.873 [2024-12-05 20:06:26.041635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.873 [2024-12-05 20:06:26.041647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:24.873 request: 00:13:24.873 { 00:13:24.873 "name": "raid_bdev1", 00:13:24.873 "raid_level": "concat", 00:13:24.873 "base_bdevs": [ 00:13:24.873 "malloc1", 00:13:24.873 "malloc2", 00:13:24.873 "malloc3", 00:13:24.873 "malloc4" 00:13:24.873 ], 00:13:24.873 "strip_size_kb": 64, 00:13:24.873 "superblock": false, 00:13:24.873 "method": "bdev_raid_create", 00:13:24.873 "req_id": 1 00:13:24.873 } 00:13:24.873 Got JSON-RPC error response 00:13:24.873 response: 00:13:24.873 { 00:13:24.873 "code": -17, 00:13:24.873 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:24.873 } 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.873 [2024-12-05 20:06:26.107041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.873 [2024-12-05 20:06:26.107111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.873 [2024-12-05 20:06:26.107134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:24.873 [2024-12-05 20:06:26.107147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.873 [2024-12-05 20:06:26.109711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.873 [2024-12-05 20:06:26.109763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.873 [2024-12-05 20:06:26.109873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:24.873 [2024-12-05 20:06:26.109985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.873 pt1 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 
00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.873 "name": "raid_bdev1", 00:13:24.873 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:24.873 "strip_size_kb": 64, 00:13:24.873 "state": "configuring", 00:13:24.873 "raid_level": "concat", 00:13:24.873 "superblock": true, 00:13:24.873 "num_base_bdevs": 4, 00:13:24.873 "num_base_bdevs_discovered": 1, 00:13:24.873 "num_base_bdevs_operational": 4, 00:13:24.873 
"base_bdevs_list": [ 00:13:24.873 { 00:13:24.873 "name": "pt1", 00:13:24.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.873 "is_configured": true, 00:13:24.873 "data_offset": 2048, 00:13:24.873 "data_size": 63488 00:13:24.873 }, 00:13:24.873 { 00:13:24.873 "name": null, 00:13:24.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.873 "is_configured": false, 00:13:24.873 "data_offset": 2048, 00:13:24.873 "data_size": 63488 00:13:24.873 }, 00:13:24.873 { 00:13:24.873 "name": null, 00:13:24.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.873 "is_configured": false, 00:13:24.873 "data_offset": 2048, 00:13:24.873 "data_size": 63488 00:13:24.873 }, 00:13:24.873 { 00:13:24.873 "name": null, 00:13:24.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.873 "is_configured": false, 00:13:24.873 "data_offset": 2048, 00:13:24.873 "data_size": 63488 00:13:24.873 } 00:13:24.873 ] 00:13:24.873 }' 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.873 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.133 [2024-12-05 20:06:26.526354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.133 [2024-12-05 20:06:26.526441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.133 [2024-12-05 20:06:26.526461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:25.133 
[2024-12-05 20:06:26.526473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.133 [2024-12-05 20:06:26.526935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.133 [2024-12-05 20:06:26.526957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.133 [2024-12-05 20:06:26.527043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:25.133 [2024-12-05 20:06:26.527069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.133 pt2 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.133 [2024-12-05 20:06:26.538322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.133 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.134 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.394 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.394 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.394 "name": "raid_bdev1", 00:13:25.394 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:25.394 "strip_size_kb": 64, 00:13:25.394 "state": "configuring", 00:13:25.394 "raid_level": "concat", 00:13:25.394 "superblock": true, 00:13:25.394 "num_base_bdevs": 4, 00:13:25.394 "num_base_bdevs_discovered": 1, 00:13:25.394 "num_base_bdevs_operational": 4, 00:13:25.394 "base_bdevs_list": [ 00:13:25.394 { 00:13:25.394 "name": "pt1", 00:13:25.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.394 "is_configured": true, 00:13:25.394 "data_offset": 2048, 00:13:25.394 "data_size": 63488 00:13:25.394 }, 00:13:25.394 { 00:13:25.394 "name": null, 00:13:25.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.394 "is_configured": false, 00:13:25.394 "data_offset": 0, 00:13:25.394 "data_size": 63488 00:13:25.394 }, 00:13:25.394 { 00:13:25.394 "name": null, 00:13:25.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.394 "is_configured": false, 00:13:25.394 
"data_offset": 2048, 00:13:25.394 "data_size": 63488 00:13:25.394 }, 00:13:25.394 { 00:13:25.394 "name": null, 00:13:25.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.394 "is_configured": false, 00:13:25.394 "data_offset": 2048, 00:13:25.394 "data_size": 63488 00:13:25.394 } 00:13:25.394 ] 00:13:25.394 }' 00:13:25.394 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.394 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.655 [2024-12-05 20:06:26.949649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.655 [2024-12-05 20:06:26.949803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.655 [2024-12-05 20:06:26.949922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:25.655 [2024-12-05 20:06:26.949974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.655 [2024-12-05 20:06:26.950547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.655 [2024-12-05 20:06:26.950624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.655 [2024-12-05 20:06:26.950767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:25.655 [2024-12-05 20:06:26.950835] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.655 pt2 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.655 [2024-12-05 20:06:26.961589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:25.655 [2024-12-05 20:06:26.961683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.655 [2024-12-05 20:06:26.961744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:25.655 [2024-12-05 20:06:26.961768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.655 [2024-12-05 20:06:26.962251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.655 [2024-12-05 20:06:26.962280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:25.655 [2024-12-05 20:06:26.962360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:25.655 [2024-12-05 20:06:26.962389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.655 pt3 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.655 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.656 20:06:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.656 [2024-12-05 20:06:26.973556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:25.656 [2024-12-05 20:06:26.973605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.656 [2024-12-05 20:06:26.973625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:25.656 [2024-12-05 20:06:26.973636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.656 [2024-12-05 20:06:26.974096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.656 [2024-12-05 20:06:26.974116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:25.656 [2024-12-05 20:06:26.974197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:25.656 [2024-12-05 20:06:26.974223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:25.656 [2024-12-05 20:06:26.974395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:25.656 [2024-12-05 20:06:26.974407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:25.656 [2024-12-05 20:06:26.974714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:25.656 [2024-12-05 20:06:26.974890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:25.656 [2024-12-05 20:06:26.974905] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:25.656 [2024-12-05 20:06:26.975074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.656 pt4 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:25.656 20:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.656 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.656 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.656 "name": "raid_bdev1", 00:13:25.656 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:25.656 "strip_size_kb": 64, 00:13:25.656 "state": "online", 00:13:25.656 "raid_level": "concat", 00:13:25.656 "superblock": true, 00:13:25.656 "num_base_bdevs": 4, 00:13:25.656 "num_base_bdevs_discovered": 4, 00:13:25.656 "num_base_bdevs_operational": 4, 00:13:25.656 "base_bdevs_list": [ 00:13:25.656 { 00:13:25.656 "name": "pt1", 00:13:25.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.656 "is_configured": true, 00:13:25.656 "data_offset": 2048, 00:13:25.656 "data_size": 63488 00:13:25.656 }, 00:13:25.656 { 00:13:25.656 "name": "pt2", 00:13:25.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.656 "is_configured": true, 00:13:25.656 "data_offset": 2048, 00:13:25.656 "data_size": 63488 00:13:25.656 }, 00:13:25.656 { 00:13:25.656 "name": "pt3", 00:13:25.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.656 "is_configured": true, 00:13:25.656 "data_offset": 2048, 00:13:25.656 "data_size": 63488 00:13:25.656 }, 00:13:25.656 { 00:13:25.656 "name": "pt4", 00:13:25.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.656 "is_configured": true, 00:13:25.656 "data_offset": 2048, 00:13:25.656 "data_size": 63488 00:13:25.656 } 00:13:25.656 ] 00:13:25.656 }' 00:13:25.656 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.656 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.226 [2024-12-05 20:06:27.425263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.226 "name": "raid_bdev1", 00:13:26.226 "aliases": [ 00:13:26.226 "72e7dd6f-7322-4e58-84c7-67c20e64de81" 00:13:26.226 ], 00:13:26.226 "product_name": "Raid Volume", 00:13:26.226 "block_size": 512, 00:13:26.226 "num_blocks": 253952, 00:13:26.226 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:26.226 "assigned_rate_limits": { 00:13:26.226 "rw_ios_per_sec": 0, 00:13:26.226 "rw_mbytes_per_sec": 0, 00:13:26.226 "r_mbytes_per_sec": 0, 00:13:26.226 "w_mbytes_per_sec": 0 00:13:26.226 }, 00:13:26.226 "claimed": false, 00:13:26.226 "zoned": false, 00:13:26.226 "supported_io_types": { 00:13:26.226 "read": true, 00:13:26.226 "write": true, 00:13:26.226 "unmap": true, 00:13:26.226 "flush": true, 00:13:26.226 "reset": true, 00:13:26.226 "nvme_admin": false, 
00:13:26.226 "nvme_io": false, 00:13:26.226 "nvme_io_md": false, 00:13:26.226 "write_zeroes": true, 00:13:26.226 "zcopy": false, 00:13:26.226 "get_zone_info": false, 00:13:26.226 "zone_management": false, 00:13:26.226 "zone_append": false, 00:13:26.226 "compare": false, 00:13:26.226 "compare_and_write": false, 00:13:26.226 "abort": false, 00:13:26.226 "seek_hole": false, 00:13:26.226 "seek_data": false, 00:13:26.226 "copy": false, 00:13:26.226 "nvme_iov_md": false 00:13:26.226 }, 00:13:26.226 "memory_domains": [ 00:13:26.226 { 00:13:26.226 "dma_device_id": "system", 00:13:26.226 "dma_device_type": 1 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.226 "dma_device_type": 2 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "system", 00:13:26.226 "dma_device_type": 1 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.226 "dma_device_type": 2 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "system", 00:13:26.226 "dma_device_type": 1 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.226 "dma_device_type": 2 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "system", 00:13:26.226 "dma_device_type": 1 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.226 "dma_device_type": 2 00:13:26.226 } 00:13:26.226 ], 00:13:26.226 "driver_specific": { 00:13:26.226 "raid": { 00:13:26.226 "uuid": "72e7dd6f-7322-4e58-84c7-67c20e64de81", 00:13:26.226 "strip_size_kb": 64, 00:13:26.226 "state": "online", 00:13:26.226 "raid_level": "concat", 00:13:26.226 "superblock": true, 00:13:26.226 "num_base_bdevs": 4, 00:13:26.226 "num_base_bdevs_discovered": 4, 00:13:26.226 "num_base_bdevs_operational": 4, 00:13:26.226 "base_bdevs_list": [ 00:13:26.226 { 00:13:26.226 "name": "pt1", 00:13:26.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.226 "is_configured": true, 00:13:26.226 
"data_offset": 2048, 00:13:26.226 "data_size": 63488 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "name": "pt2", 00:13:26.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.226 "is_configured": true, 00:13:26.226 "data_offset": 2048, 00:13:26.226 "data_size": 63488 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "name": "pt3", 00:13:26.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.226 "is_configured": true, 00:13:26.226 "data_offset": 2048, 00:13:26.226 "data_size": 63488 00:13:26.226 }, 00:13:26.226 { 00:13:26.226 "name": "pt4", 00:13:26.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:26.226 "is_configured": true, 00:13:26.226 "data_offset": 2048, 00:13:26.226 "data_size": 63488 00:13:26.226 } 00:13:26.226 ] 00:13:26.226 } 00:13:26.226 } 00:13:26.226 }' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:26.226 pt2 00:13:26.226 pt3 00:13:26.226 pt4' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.226 20:06:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.226 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.487 [2024-12-05 20:06:27.776808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 72e7dd6f-7322-4e58-84c7-67c20e64de81 '!=' 72e7dd6f-7322-4e58-84c7-67c20e64de81 ']' 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.487 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72759 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72759 ']' 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72759 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72759 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72759' 00:13:26.488 killing process with pid 72759 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72759 00:13:26.488 [2024-12-05 20:06:27.860509] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.488 [2024-12-05 20:06:27.860661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.488 20:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72759 00:13:26.488 [2024-12-05 20:06:27.860798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.488 [2024-12-05 20:06:27.860860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:27.057 [2024-12-05 20:06:28.272681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:13:28.436 20:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:28.436 00:13:28.436 real 0m5.652s 00:13:28.436 user 0m8.084s 00:13:28.436 sys 0m0.988s 00:13:28.436 20:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.436 20:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.436 ************************************ 00:13:28.436 END TEST raid_superblock_test 00:13:28.436 ************************************ 00:13:28.436 20:06:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:28.436 20:06:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:28.436 20:06:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.436 20:06:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.436 ************************************ 00:13:28.436 START TEST raid_read_error_test 00:13:28.436 ************************************ 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 
00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WXqY4jEKPP 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73029 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73029 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73029 ']' 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.436 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.436 [2024-12-05 20:06:29.624694] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:13:28.436 [2024-12-05 20:06:29.624815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73029 ] 00:13:28.436 [2024-12-05 20:06:29.799610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.696 [2024-12-05 20:06:29.913871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.696 [2024-12-05 20:06:30.117041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.696 [2024-12-05 20:06:30.117192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 BaseBdev1_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 true 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 [2024-12-05 20:06:30.523824] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:29.266 [2024-12-05 20:06:30.523879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.266 [2024-12-05 20:06:30.523911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:29.266 [2024-12-05 20:06:30.523923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.266 [2024-12-05 20:06:30.526067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.266 [2024-12-05 20:06:30.526107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.266 BaseBdev1 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 BaseBdev2_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 true 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 [2024-12-05 20:06:30.587869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:29.266 [2024-12-05 20:06:30.587930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.266 [2024-12-05 20:06:30.587964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:29.266 [2024-12-05 20:06:30.587975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.266 [2024-12-05 20:06:30.590184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.266 [2024-12-05 20:06:30.590224] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.266 BaseBdev2 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 BaseBdev3_malloc 00:13:29.266 20:06:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 true 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.267 [2024-12-05 20:06:30.666263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:29.267 [2024-12-05 20:06:30.666315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.267 [2024-12-05 20:06:30.666333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:29.267 [2024-12-05 20:06:30.666344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.267 [2024-12-05 20:06:30.668617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.267 [2024-12-05 20:06:30.668694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.267 BaseBdev3 00:13:29.267 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.267 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.267 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:29.267 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.267 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 BaseBdev4_malloc 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 true 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 [2024-12-05 20:06:30.734017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:29.527 [2024-12-05 20:06:30.734066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.527 [2024-12-05 20:06:30.734084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:29.527 [2024-12-05 20:06:30.734095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.527 [2024-12-05 20:06:30.736230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.527 [2024-12-05 20:06:30.736337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:29.527 BaseBdev4 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 [2024-12-05 20:06:30.746061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.527 [2024-12-05 20:06:30.747833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.527 [2024-12-05 20:06:30.747920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.527 [2024-12-05 20:06:30.747984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:29.527 [2024-12-05 20:06:30.748212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:29.527 [2024-12-05 20:06:30.748228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:29.527 [2024-12-05 20:06:30.748516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:29.527 [2024-12-05 20:06:30.748700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:29.527 [2024-12-05 20:06:30.748712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:29.527 [2024-12-05 20:06:30.748875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:29.527 20:06:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.528 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.528 "name": "raid_bdev1", 00:13:29.528 "uuid": "b9146b42-037c-40b0-915f-34df0374106c", 00:13:29.528 "strip_size_kb": 64, 00:13:29.528 "state": "online", 00:13:29.528 "raid_level": "concat", 00:13:29.528 "superblock": true, 00:13:29.528 "num_base_bdevs": 4, 00:13:29.528 "num_base_bdevs_discovered": 4, 00:13:29.528 "num_base_bdevs_operational": 4, 00:13:29.528 "base_bdevs_list": [ 
00:13:29.528 { 00:13:29.528 "name": "BaseBdev1", 00:13:29.528 "uuid": "5aee2272-5a03-5cbd-9169-86ea6739fc9c", 00:13:29.528 "is_configured": true, 00:13:29.528 "data_offset": 2048, 00:13:29.528 "data_size": 63488 00:13:29.528 }, 00:13:29.528 { 00:13:29.528 "name": "BaseBdev2", 00:13:29.528 "uuid": "77dd3d88-0548-5885-b392-0267549d9d26", 00:13:29.528 "is_configured": true, 00:13:29.528 "data_offset": 2048, 00:13:29.528 "data_size": 63488 00:13:29.528 }, 00:13:29.528 { 00:13:29.528 "name": "BaseBdev3", 00:13:29.528 "uuid": "7ed26492-dc00-501d-b94b-32f6b368629a", 00:13:29.528 "is_configured": true, 00:13:29.528 "data_offset": 2048, 00:13:29.528 "data_size": 63488 00:13:29.528 }, 00:13:29.528 { 00:13:29.528 "name": "BaseBdev4", 00:13:29.528 "uuid": "34985775-a782-5593-a36a-fd97953eb310", 00:13:29.528 "is_configured": true, 00:13:29.528 "data_offset": 2048, 00:13:29.528 "data_size": 63488 00:13:29.528 } 00:13:29.528 ] 00:13:29.528 }' 00:13:29.528 20:06:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.528 20:06:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.788 20:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:29.788 20:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:30.048 [2024-12-05 20:06:31.282615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.987 20:06:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.987 20:06:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.987 "name": "raid_bdev1", 00:13:30.987 "uuid": "b9146b42-037c-40b0-915f-34df0374106c", 00:13:30.987 "strip_size_kb": 64, 00:13:30.987 "state": "online", 00:13:30.987 "raid_level": "concat", 00:13:30.987 "superblock": true, 00:13:30.987 "num_base_bdevs": 4, 00:13:30.987 "num_base_bdevs_discovered": 4, 00:13:30.987 "num_base_bdevs_operational": 4, 00:13:30.987 "base_bdevs_list": [ 00:13:30.987 { 00:13:30.987 "name": "BaseBdev1", 00:13:30.987 "uuid": "5aee2272-5a03-5cbd-9169-86ea6739fc9c", 00:13:30.987 "is_configured": true, 00:13:30.987 "data_offset": 2048, 00:13:30.987 "data_size": 63488 00:13:30.987 }, 00:13:30.987 { 00:13:30.987 "name": "BaseBdev2", 00:13:30.987 "uuid": "77dd3d88-0548-5885-b392-0267549d9d26", 00:13:30.987 "is_configured": true, 00:13:30.987 "data_offset": 2048, 00:13:30.987 "data_size": 63488 00:13:30.987 }, 00:13:30.987 { 00:13:30.987 "name": "BaseBdev3", 00:13:30.987 "uuid": "7ed26492-dc00-501d-b94b-32f6b368629a", 00:13:30.987 "is_configured": true, 00:13:30.987 "data_offset": 2048, 00:13:30.987 "data_size": 63488 00:13:30.987 }, 00:13:30.987 { 00:13:30.987 "name": "BaseBdev4", 00:13:30.987 "uuid": "34985775-a782-5593-a36a-fd97953eb310", 00:13:30.987 "is_configured": true, 00:13:30.987 "data_offset": 2048, 00:13:30.987 "data_size": 63488 00:13:30.987 } 00:13:30.987 ] 00:13:30.987 }' 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.987 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.556 [2024-12-05 20:06:32.691254] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:31.556 [2024-12-05 20:06:32.691289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:31.556 [2024-12-05 20:06:32.694583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:31.556 [2024-12-05 20:06:32.694695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:31.556 [2024-12-05 20:06:32.694768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:31.556 [2024-12-05 20:06:32.694829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:13:31.556 {
00:13:31.556 "results": [
00:13:31.556 {
00:13:31.556 "job": "raid_bdev1",
00:13:31.556 "core_mask": "0x1",
00:13:31.556 "workload": "randrw",
00:13:31.556 "percentage": 50,
00:13:31.556 "status": "finished",
00:13:31.556 "queue_depth": 1,
00:13:31.556 "io_size": 131072,
00:13:31.556 "runtime": 1.409307,
00:13:31.556 "iops": 14659.687349881893,
00:13:31.556 "mibps": 1832.4609187352366,
00:13:31.556 "io_failed": 1,
00:13:31.556 "io_timeout": 0,
00:13:31.556 "avg_latency_us": 94.49302778963128,
00:13:31.556 "min_latency_us": 27.276855895196505,
00:13:31.556 "max_latency_us": 1531.0812227074236
00:13:31.556 }
00:13:31.556 ],
00:13:31.556 "core_count": 1
00:13:31.556 }
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73029
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73029 ']'
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73029
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73029
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73029'
killing process with pid 73029
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73029
00:13:31.556 20:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73029
00:13:31.556 [2024-12-05 20:06:32.739305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:31.815 [2024-12-05 20:06:33.077118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:33.192 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WXqY4jEKPP
00:13:33.192 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:13:33.192 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
************************************
00:13:33.193 END TEST raid_read_error_test
************************************
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:13:33.193
00:13:33.193 real 0m4.779s
00:13:33.193 user 0m5.657s
00:13:33.193 sys 0m0.561s
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:33.193 20:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:33.193 20:06:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write
00:13:33.193 20:06:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:33.193 20:06:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:33.193 20:06:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:33.193 ************************************
00:13:33.193 START TEST raid_write_error_test
************************************
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QMLfr3xdnO
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73175
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73175
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73175 ']'
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:33.193 20:06:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:33.193 [2024-12-05 20:06:34.473441] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:13:33.193 [2024-12-05 20:06:34.473652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73175 ]
00:13:33.453 [2024-12-05 20:06:34.644644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:33.453 [2024-12-05 20:06:34.765045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:33.712 [2024-12-05 20:06:34.971165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:33.712 [2024-12-05 20:06:34.971250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:33.971 BaseBdev1_malloc
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:33.971 true
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.971 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:33.972 [2024-12-05 20:06:35.373882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:13:33.972 [2024-12-05 20:06:35.373945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:33.972 [2024-12-05 20:06:35.373964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:13:33.972 [2024-12-05 20:06:35.373974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:33.972 [2024-12-05 20:06:35.376064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:33.972 [2024-12-05 20:06:35.376103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:33.972 BaseBdev1
00:13:33.972 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.972 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:33.972 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:33.972 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.972 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 BaseBdev2_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 true
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 [2024-12-05 20:06:35.439803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:13:34.231 [2024-12-05 20:06:35.439858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.231 [2024-12-05 20:06:35.439892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:13:34.231 [2024-12-05 20:06:35.439918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.231 [2024-12-05 20:06:35.442188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.231 [2024-12-05 20:06:35.442226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:34.231 BaseBdev2
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 BaseBdev3_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 true
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 [2024-12-05 20:06:35.515748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:13:34.231 [2024-12-05 20:06:35.515799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.231 [2024-12-05 20:06:35.515816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:13:34.231 [2024-12-05 20:06:35.515825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.231 [2024-12-05 20:06:35.518150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.231 [2024-12-05 20:06:35.518190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:34.231 BaseBdev3
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 BaseBdev4_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 true
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 [2024-12-05 20:06:35.583466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:13:34.231 [2024-12-05 20:06:35.583521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.231 [2024-12-05 20:06:35.583539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:34.231 [2024-12-05 20:06:35.583549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.231 [2024-12-05 20:06:35.585799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.231 [2024-12-05 20:06:35.585838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:34.231 BaseBdev4
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 [2024-12-05 20:06:35.595508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:34.231 [2024-12-05 20:06:35.597420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:34.231 [2024-12-05 20:06:35.597587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:34.231 [2024-12-05 20:06:35.597666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:34.231 [2024-12-05 20:06:35.597932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:13:34.231 [2024-12-05 20:06:35.597953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:13:34.231 [2024-12-05 20:06:35.598228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:13:34.231 [2024-12-05 20:06:35.598406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:13:34.231 [2024-12-05 20:06:35.598417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:13:34.231 [2024-12-05 20:06:35.598575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.231 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:34.231 "name": "raid_bdev1",
00:13:34.231 "uuid": "89746c41-5811-4387-8e70-0529c707b3eb",
00:13:34.231 "strip_size_kb": 64,
00:13:34.231 "state": "online",
00:13:34.231 "raid_level": "concat",
00:13:34.231 "superblock": true,
00:13:34.231 "num_base_bdevs": 4,
00:13:34.231 "num_base_bdevs_discovered": 4,
00:13:34.231 "num_base_bdevs_operational": 4,
00:13:34.231 "base_bdevs_list": [
00:13:34.231 {
00:13:34.231 "name": "BaseBdev1",
00:13:34.231 "uuid": "4c49a2d2-8a10-5e97-a396-0c5a12c916bf",
00:13:34.231 "is_configured": true,
00:13:34.231 "data_offset": 2048,
00:13:34.231 "data_size": 63488
00:13:34.231 },
00:13:34.231 {
00:13:34.231 "name": "BaseBdev2",
00:13:34.231 "uuid": "e837922f-55eb-50fd-9d1b-bda7b2f1b31c",
00:13:34.231 "is_configured": true,
00:13:34.231 "data_offset": 2048,
00:13:34.231 "data_size": 63488
00:13:34.231 },
00:13:34.231 {
00:13:34.231 "name": "BaseBdev3",
00:13:34.231 "uuid": "0c28026c-6909-5f27-a08d-00b4a18b20e6",
00:13:34.231 "is_configured": true,
00:13:34.231 "data_offset": 2048,
00:13:34.231 "data_size": 63488
00:13:34.231 },
00:13:34.231 {
00:13:34.232 "name": "BaseBdev4",
00:13:34.232 "uuid": "aafa119e-4b00-55d6-be64-92a7f008ea7c",
00:13:34.232 "is_configured": true,
00:13:34.232 "data_offset": 2048,
00:13:34.232 "data_size": 63488
00:13:34.232 }
00:13:34.232 ]
00:13:34.232 }'
00:13:34.232 20:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:34.232 20:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.800 20:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:13:34.800 20:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-12-05 20:06:36.104091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:35.738 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:35.738 "name": "raid_bdev1",
00:13:35.738 "uuid": "89746c41-5811-4387-8e70-0529c707b3eb",
00:13:35.738 "strip_size_kb": 64,
00:13:35.738 "state": "online",
00:13:35.738 "raid_level": "concat",
00:13:35.738 "superblock": true,
00:13:35.738 "num_base_bdevs": 4,
00:13:35.738 "num_base_bdevs_discovered": 4,
00:13:35.738 "num_base_bdevs_operational": 4,
00:13:35.738 "base_bdevs_list": [
00:13:35.738 {
00:13:35.738 "name": "BaseBdev1",
00:13:35.738 "uuid": "4c49a2d2-8a10-5e97-a396-0c5a12c916bf",
00:13:35.738 "is_configured": true,
00:13:35.738 "data_offset": 2048,
00:13:35.738 "data_size": 63488
00:13:35.738 },
00:13:35.738 {
00:13:35.738 "name": "BaseBdev2",
00:13:35.738 "uuid": "e837922f-55eb-50fd-9d1b-bda7b2f1b31c",
00:13:35.738 "is_configured": true,
00:13:35.738 "data_offset": 2048,
00:13:35.738 "data_size": 63488
00:13:35.738 },
00:13:35.738 {
00:13:35.738 "name": "BaseBdev3",
00:13:35.738 "uuid": "0c28026c-6909-5f27-a08d-00b4a18b20e6",
00:13:35.738 "is_configured": true,
00:13:35.738 "data_offset": 2048,
00:13:35.738 "data_size": 63488
00:13:35.738 },
00:13:35.738 {
00:13:35.739 "name": "BaseBdev4",
00:13:35.739 "uuid": "aafa119e-4b00-55d6-be64-92a7f008ea7c",
00:13:35.739 "is_configured": true,
00:13:35.739 "data_offset": 2048,
00:13:35.739 "data_size": 63488
00:13:35.739 }
00:13:35.739 ]
00:13:35.739 }'
00:13:35.739 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:35.739 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:36.316 [2024-12-05 20:06:37.460623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:36.316 [2024-12-05 20:06:37.460736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:36.316 [2024-12-05 20:06:37.463885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:36.316 [2024-12-05 20:06:37.463979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:36.316 [2024-12-05 20:06:37.464028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:36.316 [2024-12-05 20:06:37.464040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:13:36.316 {
00:13:36.316 "results": [
00:13:36.316 {
00:13:36.316 "job": "raid_bdev1",
00:13:36.316 "core_mask": "0x1",
00:13:36.316 "workload": "randrw",
00:13:36.316 "percentage": 50,
00:13:36.316 "status": "finished",
00:13:36.316 "queue_depth": 1,
00:13:36.316 "io_size": 131072,
00:13:36.316 "runtime": 1.357491,
00:13:36.316 "iops": 14714.64635861306,
00:13:36.316 "mibps": 1839.3307948266324,
00:13:36.316 "io_failed": 1,
00:13:36.316 "io_timeout": 0,
00:13:36.316 "avg_latency_us": 94.16545898746618,
00:13:36.316 "min_latency_us": 27.165065502183406,
00:13:36.316 "max_latency_us": 1645.5545851528384
00:13:36.316 }
00:13:36.316 ],
00:13:36.316 "core_count": 1
00:13:36.316 }
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73175
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73175 ']'
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73175
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73175
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73175'
killing process with pid 73175
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73175
00:13:36.316 [2024-12-05 20:06:37.507265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:36.316 20:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73175
00:13:36.587 [2024-12-05 20:06:37.831257] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QMLfr3xdnO
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:13:37.966
00:13:37.966 real 0m4.659s
user 0m5.462s
00:13:37.966 sys 0m0.567s
************************************
00:13:37.966 END TEST raid_write_error_test
************************************
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:37.966 20:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:37.966 20:06:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:13:37.966 20:06:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:13:37.966 20:06:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:37.966 20:06:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:37.966 20:06:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:37.966 ************************************
00:13:37.966 START TEST raid_state_function_test
************************************
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73313
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73313'
Process raid pid: 73313
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73313
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73313 ']'
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:37.966 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:37.967 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:37.967 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:37.967 [2024-12-05 20:06:39.194687] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:13:37.967 [2024-12-05 20:06:39.194921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.967 [2024-12-05 20:06:39.347419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.226 [2024-12-05 20:06:39.464974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.486 [2024-12-05 20:06:39.669787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.486 [2024-12-05 20:06:39.669832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.745 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.745 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.745 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:38.745 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.746 [2024-12-05 20:06:40.033848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.746 [2024-12-05 20:06:40.033982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.746 [2024-12-05 20:06:40.033998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.746 [2024-12-05 20:06:40.034009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.746 [2024-12-05 20:06:40.034016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:38.746 [2024-12-05 20:06:40.034025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.746 [2024-12-05 20:06:40.034036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:38.746 [2024-12-05 20:06:40.034044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.746 "name": "Existed_Raid", 00:13:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.746 "strip_size_kb": 0, 00:13:38.746 "state": "configuring", 00:13:38.746 "raid_level": "raid1", 00:13:38.746 "superblock": false, 00:13:38.746 "num_base_bdevs": 4, 00:13:38.746 "num_base_bdevs_discovered": 0, 00:13:38.746 "num_base_bdevs_operational": 4, 00:13:38.746 "base_bdevs_list": [ 00:13:38.746 { 00:13:38.746 "name": "BaseBdev1", 00:13:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.746 "is_configured": false, 00:13:38.746 "data_offset": 0, 00:13:38.746 "data_size": 0 00:13:38.746 }, 00:13:38.746 { 00:13:38.746 "name": "BaseBdev2", 00:13:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.746 "is_configured": false, 00:13:38.746 "data_offset": 0, 00:13:38.746 "data_size": 0 00:13:38.746 }, 00:13:38.746 { 00:13:38.746 "name": "BaseBdev3", 00:13:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.746 "is_configured": false, 00:13:38.746 "data_offset": 0, 00:13:38.746 "data_size": 0 00:13:38.746 }, 00:13:38.746 { 00:13:38.746 "name": "BaseBdev4", 00:13:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.746 "is_configured": false, 00:13:38.746 "data_offset": 0, 00:13:38.746 "data_size": 0 00:13:38.746 } 00:13:38.746 ] 00:13:38.746 }' 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.746 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 [2024-12-05 20:06:40.477041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.316 [2024-12-05 20:06:40.477130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 [2024-12-05 20:06:40.489015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.316 [2024-12-05 20:06:40.489093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.316 [2024-12-05 20:06:40.489125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.316 [2024-12-05 20:06:40.489149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.316 [2024-12-05 20:06:40.489197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.316 [2024-12-05 20:06:40.489221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.316 [2024-12-05 20:06:40.489261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:39.316 [2024-12-05 20:06:40.489293] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 [2024-12-05 20:06:40.536513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.316 BaseBdev1 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.316 [ 00:13:39.316 { 00:13:39.316 "name": "BaseBdev1", 00:13:39.316 "aliases": [ 00:13:39.316 "28571328-a726-49d5-a770-72d1657d4ee4" 00:13:39.316 ], 00:13:39.316 "product_name": "Malloc disk", 00:13:39.316 "block_size": 512, 00:13:39.316 "num_blocks": 65536, 00:13:39.316 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:39.316 "assigned_rate_limits": { 00:13:39.316 "rw_ios_per_sec": 0, 00:13:39.316 "rw_mbytes_per_sec": 0, 00:13:39.316 "r_mbytes_per_sec": 0, 00:13:39.316 "w_mbytes_per_sec": 0 00:13:39.316 }, 00:13:39.316 "claimed": true, 00:13:39.316 "claim_type": "exclusive_write", 00:13:39.316 "zoned": false, 00:13:39.316 "supported_io_types": { 00:13:39.316 "read": true, 00:13:39.316 "write": true, 00:13:39.316 "unmap": true, 00:13:39.316 "flush": true, 00:13:39.316 "reset": true, 00:13:39.316 "nvme_admin": false, 00:13:39.316 "nvme_io": false, 00:13:39.316 "nvme_io_md": false, 00:13:39.316 "write_zeroes": true, 00:13:39.316 "zcopy": true, 00:13:39.316 "get_zone_info": false, 00:13:39.316 "zone_management": false, 00:13:39.316 "zone_append": false, 00:13:39.316 "compare": false, 00:13:39.316 "compare_and_write": false, 00:13:39.316 "abort": true, 00:13:39.316 "seek_hole": false, 00:13:39.316 "seek_data": false, 00:13:39.316 "copy": true, 00:13:39.316 "nvme_iov_md": false 00:13:39.316 }, 00:13:39.316 "memory_domains": [ 00:13:39.316 { 00:13:39.316 "dma_device_id": "system", 00:13:39.316 "dma_device_type": 1 00:13:39.316 }, 00:13:39.316 { 00:13:39.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.316 "dma_device_type": 2 00:13:39.316 } 00:13:39.316 ], 00:13:39.316 "driver_specific": {} 00:13:39.316 } 00:13:39.316 ] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.316 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.317 "name": "Existed_Raid", 
00:13:39.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.317 "strip_size_kb": 0, 00:13:39.317 "state": "configuring", 00:13:39.317 "raid_level": "raid1", 00:13:39.317 "superblock": false, 00:13:39.317 "num_base_bdevs": 4, 00:13:39.317 "num_base_bdevs_discovered": 1, 00:13:39.317 "num_base_bdevs_operational": 4, 00:13:39.317 "base_bdevs_list": [ 00:13:39.317 { 00:13:39.317 "name": "BaseBdev1", 00:13:39.317 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:39.317 "is_configured": true, 00:13:39.317 "data_offset": 0, 00:13:39.317 "data_size": 65536 00:13:39.317 }, 00:13:39.317 { 00:13:39.317 "name": "BaseBdev2", 00:13:39.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.317 "is_configured": false, 00:13:39.317 "data_offset": 0, 00:13:39.317 "data_size": 0 00:13:39.317 }, 00:13:39.317 { 00:13:39.317 "name": "BaseBdev3", 00:13:39.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.317 "is_configured": false, 00:13:39.317 "data_offset": 0, 00:13:39.317 "data_size": 0 00:13:39.317 }, 00:13:39.317 { 00:13:39.317 "name": "BaseBdev4", 00:13:39.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.317 "is_configured": false, 00:13:39.317 "data_offset": 0, 00:13:39.317 "data_size": 0 00:13:39.317 } 00:13:39.317 ] 00:13:39.317 }' 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.317 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 [2024-12-05 20:06:40.971828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.577 [2024-12-05 20:06:40.971879] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 [2024-12-05 20:06:40.979844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.577 [2024-12-05 20:06:40.981756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.577 [2024-12-05 20:06:40.981798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.577 [2024-12-05 20:06:40.981808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.577 [2024-12-05 20:06:40.981818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.577 [2024-12-05 20:06:40.981825] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:39.577 [2024-12-05 20:06:40.981833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.577 
20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.577 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.578 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.578 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.578 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.578 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.836 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.837 "name": "Existed_Raid", 00:13:39.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.837 "strip_size_kb": 0, 00:13:39.837 "state": "configuring", 00:13:39.837 "raid_level": "raid1", 00:13:39.837 "superblock": false, 00:13:39.837 "num_base_bdevs": 4, 00:13:39.837 "num_base_bdevs_discovered": 1, 
00:13:39.837 "num_base_bdevs_operational": 4, 00:13:39.837 "base_bdevs_list": [ 00:13:39.837 { 00:13:39.837 "name": "BaseBdev1", 00:13:39.837 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:39.837 "is_configured": true, 00:13:39.837 "data_offset": 0, 00:13:39.837 "data_size": 65536 00:13:39.837 }, 00:13:39.837 { 00:13:39.837 "name": "BaseBdev2", 00:13:39.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.837 "is_configured": false, 00:13:39.837 "data_offset": 0, 00:13:39.837 "data_size": 0 00:13:39.837 }, 00:13:39.837 { 00:13:39.837 "name": "BaseBdev3", 00:13:39.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.837 "is_configured": false, 00:13:39.837 "data_offset": 0, 00:13:39.837 "data_size": 0 00:13:39.837 }, 00:13:39.837 { 00:13:39.837 "name": "BaseBdev4", 00:13:39.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.837 "is_configured": false, 00:13:39.837 "data_offset": 0, 00:13:39.837 "data_size": 0 00:13:39.837 } 00:13:39.837 ] 00:13:39.837 }' 00:13:39.837 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.837 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.096 [2024-12-05 20:06:41.485298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.096 BaseBdev2 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.096 [ 00:13:40.096 { 00:13:40.096 "name": "BaseBdev2", 00:13:40.096 "aliases": [ 00:13:40.096 "14966648-100f-45ed-8054-18aee13b3b06" 00:13:40.096 ], 00:13:40.096 "product_name": "Malloc disk", 00:13:40.096 "block_size": 512, 00:13:40.096 "num_blocks": 65536, 00:13:40.096 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:40.096 "assigned_rate_limits": { 00:13:40.096 "rw_ios_per_sec": 0, 00:13:40.096 "rw_mbytes_per_sec": 0, 00:13:40.096 "r_mbytes_per_sec": 0, 00:13:40.096 "w_mbytes_per_sec": 0 00:13:40.096 }, 00:13:40.096 "claimed": true, 00:13:40.096 "claim_type": "exclusive_write", 00:13:40.096 "zoned": false, 00:13:40.096 "supported_io_types": { 00:13:40.096 "read": true, 
00:13:40.096 "write": true, 00:13:40.096 "unmap": true, 00:13:40.096 "flush": true, 00:13:40.096 "reset": true, 00:13:40.096 "nvme_admin": false, 00:13:40.096 "nvme_io": false, 00:13:40.096 "nvme_io_md": false, 00:13:40.096 "write_zeroes": true, 00:13:40.096 "zcopy": true, 00:13:40.096 "get_zone_info": false, 00:13:40.096 "zone_management": false, 00:13:40.096 "zone_append": false, 00:13:40.096 "compare": false, 00:13:40.096 "compare_and_write": false, 00:13:40.096 "abort": true, 00:13:40.096 "seek_hole": false, 00:13:40.096 "seek_data": false, 00:13:40.096 "copy": true, 00:13:40.096 "nvme_iov_md": false 00:13:40.096 }, 00:13:40.096 "memory_domains": [ 00:13:40.096 { 00:13:40.096 "dma_device_id": "system", 00:13:40.096 "dma_device_type": 1 00:13:40.096 }, 00:13:40.096 { 00:13:40.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.096 "dma_device_type": 2 00:13:40.096 } 00:13:40.096 ], 00:13:40.096 "driver_specific": {} 00:13:40.096 } 00:13:40.096 ] 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.096 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.355 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.355 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.355 "name": "Existed_Raid", 00:13:40.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.355 "strip_size_kb": 0, 00:13:40.355 "state": "configuring", 00:13:40.355 "raid_level": "raid1", 00:13:40.355 "superblock": false, 00:13:40.355 "num_base_bdevs": 4, 00:13:40.355 "num_base_bdevs_discovered": 2, 00:13:40.355 "num_base_bdevs_operational": 4, 00:13:40.355 "base_bdevs_list": [ 00:13:40.355 { 00:13:40.355 "name": "BaseBdev1", 00:13:40.355 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:40.355 "is_configured": true, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": "BaseBdev2", 00:13:40.355 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:40.355 "is_configured": true, 
00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": "BaseBdev3", 00:13:40.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.355 "is_configured": false, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 0 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": "BaseBdev4", 00:13:40.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.355 "is_configured": false, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 0 00:13:40.355 } 00:13:40.355 ] 00:13:40.355 }' 00:13:40.355 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.355 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.724 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:40.724 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.724 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.724 [2024-12-05 20:06:42.013853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.724 BaseBdev3 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.724 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.724 [ 00:13:40.724 { 00:13:40.724 "name": "BaseBdev3", 00:13:40.724 "aliases": [ 00:13:40.724 "8ec967c6-9306-4393-b13b-5f410fdb2b72" 00:13:40.724 ], 00:13:40.724 "product_name": "Malloc disk", 00:13:40.724 "block_size": 512, 00:13:40.724 "num_blocks": 65536, 00:13:40.724 "uuid": "8ec967c6-9306-4393-b13b-5f410fdb2b72", 00:13:40.724 "assigned_rate_limits": { 00:13:40.724 "rw_ios_per_sec": 0, 00:13:40.724 "rw_mbytes_per_sec": 0, 00:13:40.724 "r_mbytes_per_sec": 0, 00:13:40.724 "w_mbytes_per_sec": 0 00:13:40.724 }, 00:13:40.724 "claimed": true, 00:13:40.724 "claim_type": "exclusive_write", 00:13:40.724 "zoned": false, 00:13:40.724 "supported_io_types": { 00:13:40.724 "read": true, 00:13:40.725 "write": true, 00:13:40.725 "unmap": true, 00:13:40.725 "flush": true, 00:13:40.725 "reset": true, 00:13:40.725 "nvme_admin": false, 00:13:40.725 "nvme_io": false, 00:13:40.725 "nvme_io_md": false, 00:13:40.725 "write_zeroes": true, 00:13:40.725 "zcopy": true, 00:13:40.725 "get_zone_info": false, 00:13:40.725 "zone_management": false, 00:13:40.725 "zone_append": false, 00:13:40.725 "compare": false, 00:13:40.725 "compare_and_write": false, 
00:13:40.725 "abort": true, 00:13:40.725 "seek_hole": false, 00:13:40.725 "seek_data": false, 00:13:40.725 "copy": true, 00:13:40.725 "nvme_iov_md": false 00:13:40.725 }, 00:13:40.725 "memory_domains": [ 00:13:40.725 { 00:13:40.725 "dma_device_id": "system", 00:13:40.725 "dma_device_type": 1 00:13:40.725 }, 00:13:40.725 { 00:13:40.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.725 "dma_device_type": 2 00:13:40.725 } 00:13:40.725 ], 00:13:40.725 "driver_specific": {} 00:13:40.725 } 00:13:40.725 ] 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.725 "name": "Existed_Raid", 00:13:40.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.725 "strip_size_kb": 0, 00:13:40.725 "state": "configuring", 00:13:40.725 "raid_level": "raid1", 00:13:40.725 "superblock": false, 00:13:40.725 "num_base_bdevs": 4, 00:13:40.725 "num_base_bdevs_discovered": 3, 00:13:40.725 "num_base_bdevs_operational": 4, 00:13:40.725 "base_bdevs_list": [ 00:13:40.725 { 00:13:40.725 "name": "BaseBdev1", 00:13:40.725 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:40.725 "is_configured": true, 00:13:40.725 "data_offset": 0, 00:13:40.725 "data_size": 65536 00:13:40.725 }, 00:13:40.725 { 00:13:40.725 "name": "BaseBdev2", 00:13:40.725 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:40.725 "is_configured": true, 00:13:40.725 "data_offset": 0, 00:13:40.725 "data_size": 65536 00:13:40.725 }, 00:13:40.725 { 00:13:40.725 "name": "BaseBdev3", 00:13:40.725 "uuid": "8ec967c6-9306-4393-b13b-5f410fdb2b72", 00:13:40.725 "is_configured": true, 00:13:40.725 "data_offset": 0, 00:13:40.725 "data_size": 65536 00:13:40.725 }, 00:13:40.725 { 00:13:40.725 "name": "BaseBdev4", 00:13:40.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.725 "is_configured": false, 
00:13:40.725 "data_offset": 0, 00:13:40.725 "data_size": 0 00:13:40.725 } 00:13:40.725 ] 00:13:40.725 }' 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.725 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 [2024-12-05 20:06:42.494963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.304 [2024-12-05 20:06:42.495021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:41.304 [2024-12-05 20:06:42.495030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:41.304 [2024-12-05 20:06:42.495309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:41.304 [2024-12-05 20:06:42.495484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:41.304 [2024-12-05 20:06:42.495500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:41.304 [2024-12-05 20:06:42.495771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.304 BaseBdev4 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.305 [ 00:13:41.305 { 00:13:41.305 "name": "BaseBdev4", 00:13:41.305 "aliases": [ 00:13:41.305 "e7d3a45a-21fc-4fad-9c51-7b216ba498f9" 00:13:41.305 ], 00:13:41.305 "product_name": "Malloc disk", 00:13:41.305 "block_size": 512, 00:13:41.305 "num_blocks": 65536, 00:13:41.305 "uuid": "e7d3a45a-21fc-4fad-9c51-7b216ba498f9", 00:13:41.305 "assigned_rate_limits": { 00:13:41.305 "rw_ios_per_sec": 0, 00:13:41.305 "rw_mbytes_per_sec": 0, 00:13:41.305 "r_mbytes_per_sec": 0, 00:13:41.305 "w_mbytes_per_sec": 0 00:13:41.305 }, 00:13:41.305 "claimed": true, 00:13:41.305 "claim_type": "exclusive_write", 00:13:41.305 "zoned": false, 00:13:41.305 "supported_io_types": { 00:13:41.305 "read": true, 00:13:41.305 "write": true, 00:13:41.305 "unmap": true, 00:13:41.305 "flush": true, 00:13:41.305 "reset": true, 00:13:41.305 
"nvme_admin": false, 00:13:41.305 "nvme_io": false, 00:13:41.305 "nvme_io_md": false, 00:13:41.305 "write_zeroes": true, 00:13:41.305 "zcopy": true, 00:13:41.305 "get_zone_info": false, 00:13:41.305 "zone_management": false, 00:13:41.305 "zone_append": false, 00:13:41.305 "compare": false, 00:13:41.305 "compare_and_write": false, 00:13:41.305 "abort": true, 00:13:41.305 "seek_hole": false, 00:13:41.305 "seek_data": false, 00:13:41.305 "copy": true, 00:13:41.305 "nvme_iov_md": false 00:13:41.305 }, 00:13:41.305 "memory_domains": [ 00:13:41.305 { 00:13:41.305 "dma_device_id": "system", 00:13:41.305 "dma_device_type": 1 00:13:41.305 }, 00:13:41.305 { 00:13:41.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.305 "dma_device_type": 2 00:13:41.305 } 00:13:41.305 ], 00:13:41.305 "driver_specific": {} 00:13:41.305 } 00:13:41.305 ] 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.305 20:06:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.305 "name": "Existed_Raid", 00:13:41.305 "uuid": "49df1b6f-c5cc-49f8-8ff4-1ba43744a1bc", 00:13:41.305 "strip_size_kb": 0, 00:13:41.305 "state": "online", 00:13:41.305 "raid_level": "raid1", 00:13:41.305 "superblock": false, 00:13:41.305 "num_base_bdevs": 4, 00:13:41.305 "num_base_bdevs_discovered": 4, 00:13:41.305 "num_base_bdevs_operational": 4, 00:13:41.305 "base_bdevs_list": [ 00:13:41.305 { 00:13:41.305 "name": "BaseBdev1", 00:13:41.305 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:41.305 "is_configured": true, 00:13:41.305 "data_offset": 0, 00:13:41.305 "data_size": 65536 00:13:41.305 }, 00:13:41.305 { 00:13:41.305 "name": "BaseBdev2", 00:13:41.305 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:41.305 "is_configured": true, 00:13:41.305 "data_offset": 0, 00:13:41.305 "data_size": 65536 00:13:41.305 }, 00:13:41.305 { 00:13:41.305 "name": "BaseBdev3", 00:13:41.305 "uuid": 
"8ec967c6-9306-4393-b13b-5f410fdb2b72", 00:13:41.305 "is_configured": true, 00:13:41.305 "data_offset": 0, 00:13:41.305 "data_size": 65536 00:13:41.305 }, 00:13:41.305 { 00:13:41.305 "name": "BaseBdev4", 00:13:41.305 "uuid": "e7d3a45a-21fc-4fad-9c51-7b216ba498f9", 00:13:41.305 "is_configured": true, 00:13:41.305 "data_offset": 0, 00:13:41.305 "data_size": 65536 00:13:41.305 } 00:13:41.305 ] 00:13:41.305 }' 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.305 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.566 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.566 [2024-12-05 20:06:42.998645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.826 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.826 20:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.826 "name": "Existed_Raid", 00:13:41.826 "aliases": [ 00:13:41.826 "49df1b6f-c5cc-49f8-8ff4-1ba43744a1bc" 00:13:41.826 ], 00:13:41.826 "product_name": "Raid Volume", 00:13:41.826 "block_size": 512, 00:13:41.826 "num_blocks": 65536, 00:13:41.826 "uuid": "49df1b6f-c5cc-49f8-8ff4-1ba43744a1bc", 00:13:41.826 "assigned_rate_limits": { 00:13:41.826 "rw_ios_per_sec": 0, 00:13:41.826 "rw_mbytes_per_sec": 0, 00:13:41.826 "r_mbytes_per_sec": 0, 00:13:41.826 "w_mbytes_per_sec": 0 00:13:41.826 }, 00:13:41.826 "claimed": false, 00:13:41.826 "zoned": false, 00:13:41.826 "supported_io_types": { 00:13:41.826 "read": true, 00:13:41.826 "write": true, 00:13:41.826 "unmap": false, 00:13:41.826 "flush": false, 00:13:41.826 "reset": true, 00:13:41.826 "nvme_admin": false, 00:13:41.826 "nvme_io": false, 00:13:41.826 "nvme_io_md": false, 00:13:41.826 "write_zeroes": true, 00:13:41.826 "zcopy": false, 00:13:41.826 "get_zone_info": false, 00:13:41.826 "zone_management": false, 00:13:41.826 "zone_append": false, 00:13:41.826 "compare": false, 00:13:41.826 "compare_and_write": false, 00:13:41.826 "abort": false, 00:13:41.826 "seek_hole": false, 00:13:41.826 "seek_data": false, 00:13:41.826 "copy": false, 00:13:41.826 "nvme_iov_md": false 00:13:41.826 }, 00:13:41.826 "memory_domains": [ 00:13:41.826 { 00:13:41.826 "dma_device_id": "system", 00:13:41.826 "dma_device_type": 1 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.826 "dma_device_type": 2 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "system", 00:13:41.826 "dma_device_type": 1 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.826 "dma_device_type": 2 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "system", 00:13:41.826 "dma_device_type": 1 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:41.826 "dma_device_type": 2 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "system", 00:13:41.826 "dma_device_type": 1 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.826 "dma_device_type": 2 00:13:41.826 } 00:13:41.826 ], 00:13:41.826 "driver_specific": { 00:13:41.826 "raid": { 00:13:41.826 "uuid": "49df1b6f-c5cc-49f8-8ff4-1ba43744a1bc", 00:13:41.826 "strip_size_kb": 0, 00:13:41.826 "state": "online", 00:13:41.826 "raid_level": "raid1", 00:13:41.826 "superblock": false, 00:13:41.826 "num_base_bdevs": 4, 00:13:41.826 "num_base_bdevs_discovered": 4, 00:13:41.826 "num_base_bdevs_operational": 4, 00:13:41.826 "base_bdevs_list": [ 00:13:41.826 { 00:13:41.826 "name": "BaseBdev1", 00:13:41.826 "uuid": "28571328-a726-49d5-a770-72d1657d4ee4", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "name": "BaseBdev2", 00:13:41.826 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "name": "BaseBdev3", 00:13:41.826 "uuid": "8ec967c6-9306-4393-b13b-5f410fdb2b72", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "name": "BaseBdev4", 00:13:41.826 "uuid": "e7d3a45a-21fc-4fad-9c51-7b216ba498f9", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 } 00:13:41.826 ] 00:13:41.826 } 00:13:41.826 } 00:13:41.826 }' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:41.827 BaseBdev2 00:13:41.827 BaseBdev3 
00:13:41.827 BaseBdev4' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.827 20:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.827 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.087 20:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.087 [2024-12-05 20:06:43.285761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.087 
20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.087 "name": "Existed_Raid", 00:13:42.087 "uuid": "49df1b6f-c5cc-49f8-8ff4-1ba43744a1bc", 00:13:42.087 "strip_size_kb": 0, 00:13:42.087 "state": "online", 00:13:42.087 "raid_level": "raid1", 00:13:42.087 "superblock": false, 00:13:42.087 "num_base_bdevs": 4, 00:13:42.087 "num_base_bdevs_discovered": 3, 00:13:42.087 "num_base_bdevs_operational": 3, 00:13:42.087 "base_bdevs_list": [ 00:13:42.087 { 00:13:42.087 "name": null, 00:13:42.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.087 "is_configured": false, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev2", 00:13:42.087 "uuid": "14966648-100f-45ed-8054-18aee13b3b06", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev3", 00:13:42.087 "uuid": "8ec967c6-9306-4393-b13b-5f410fdb2b72", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 
00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev4", 00:13:42.087 "uuid": "e7d3a45a-21fc-4fad-9c51-7b216ba498f9", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 } 00:13:42.087 ] 00:13:42.087 }' 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.087 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 [2024-12-05 20:06:43.888450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.658 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.658 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 [2024-12-05 20:06:44.049769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 [2024-12-05 20:06:44.211631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:42.918 [2024-12-05 20:06:44.211727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.918 [2024-12-05 20:06:44.310403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.918 [2024-12-05 20:06:44.310471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.918 [2024-12-05 20:06:44.310493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 BaseBdev2 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 [ 00:13:43.178 { 00:13:43.178 "name": "BaseBdev2", 00:13:43.178 "aliases": [ 00:13:43.178 "05975884-35a9-48a7-8b90-3edda10dcdec" 00:13:43.178 ], 00:13:43.178 "product_name": "Malloc disk", 00:13:43.178 "block_size": 512, 00:13:43.178 "num_blocks": 65536, 00:13:43.178 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:43.178 "assigned_rate_limits": { 00:13:43.178 "rw_ios_per_sec": 0, 00:13:43.178 "rw_mbytes_per_sec": 0, 00:13:43.178 "r_mbytes_per_sec": 0, 00:13:43.178 "w_mbytes_per_sec": 0 00:13:43.178 }, 00:13:43.178 "claimed": false, 00:13:43.178 "zoned": false, 00:13:43.178 "supported_io_types": { 00:13:43.178 "read": true, 00:13:43.178 "write": true, 00:13:43.178 "unmap": true, 00:13:43.178 "flush": true, 00:13:43.178 "reset": true, 00:13:43.178 "nvme_admin": false, 00:13:43.178 "nvme_io": false, 00:13:43.178 "nvme_io_md": false, 00:13:43.178 "write_zeroes": true, 00:13:43.178 "zcopy": true, 00:13:43.178 "get_zone_info": false, 00:13:43.178 "zone_management": false, 00:13:43.178 "zone_append": false, 
00:13:43.178 "compare": false, 00:13:43.178 "compare_and_write": false, 00:13:43.178 "abort": true, 00:13:43.178 "seek_hole": false, 00:13:43.178 "seek_data": false, 00:13:43.178 "copy": true, 00:13:43.178 "nvme_iov_md": false 00:13:43.178 }, 00:13:43.178 "memory_domains": [ 00:13:43.178 { 00:13:43.178 "dma_device_id": "system", 00:13:43.178 "dma_device_type": 1 00:13:43.178 }, 00:13:43.178 { 00:13:43.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.178 "dma_device_type": 2 00:13:43.178 } 00:13:43.178 ], 00:13:43.178 "driver_specific": {} 00:13:43.178 } 00:13:43.178 ] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 BaseBdev3 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 [ 00:13:43.178 { 00:13:43.179 "name": "BaseBdev3", 00:13:43.179 "aliases": [ 00:13:43.179 "996a5e13-1355-457f-a629-9aa534ad75fa" 00:13:43.179 ], 00:13:43.179 "product_name": "Malloc disk", 00:13:43.179 "block_size": 512, 00:13:43.179 "num_blocks": 65536, 00:13:43.179 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:43.179 "assigned_rate_limits": { 00:13:43.179 "rw_ios_per_sec": 0, 00:13:43.179 "rw_mbytes_per_sec": 0, 00:13:43.179 "r_mbytes_per_sec": 0, 00:13:43.179 "w_mbytes_per_sec": 0 00:13:43.179 }, 00:13:43.179 "claimed": false, 00:13:43.179 "zoned": false, 00:13:43.179 "supported_io_types": { 00:13:43.179 "read": true, 00:13:43.179 "write": true, 00:13:43.179 "unmap": true, 00:13:43.179 "flush": true, 00:13:43.179 "reset": true, 00:13:43.179 "nvme_admin": false, 00:13:43.179 "nvme_io": false, 00:13:43.179 "nvme_io_md": false, 00:13:43.179 "write_zeroes": true, 00:13:43.179 "zcopy": true, 00:13:43.179 "get_zone_info": false, 00:13:43.179 "zone_management": false, 00:13:43.179 "zone_append": false, 
00:13:43.179 "compare": false, 00:13:43.179 "compare_and_write": false, 00:13:43.179 "abort": true, 00:13:43.179 "seek_hole": false, 00:13:43.179 "seek_data": false, 00:13:43.179 "copy": true, 00:13:43.179 "nvme_iov_md": false 00:13:43.179 }, 00:13:43.179 "memory_domains": [ 00:13:43.179 { 00:13:43.179 "dma_device_id": "system", 00:13:43.179 "dma_device_type": 1 00:13:43.179 }, 00:13:43.179 { 00:13:43.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.179 "dma_device_type": 2 00:13:43.179 } 00:13:43.179 ], 00:13:43.179 "driver_specific": {} 00:13:43.179 } 00:13:43.179 ] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 BaseBdev4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 [ 00:13:43.179 { 00:13:43.179 "name": "BaseBdev4", 00:13:43.179 "aliases": [ 00:13:43.179 "ec64fb42-c01e-45a3-be5b-81eb69a3be79" 00:13:43.179 ], 00:13:43.179 "product_name": "Malloc disk", 00:13:43.179 "block_size": 512, 00:13:43.179 "num_blocks": 65536, 00:13:43.179 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:43.179 "assigned_rate_limits": { 00:13:43.179 "rw_ios_per_sec": 0, 00:13:43.179 "rw_mbytes_per_sec": 0, 00:13:43.179 "r_mbytes_per_sec": 0, 00:13:43.179 "w_mbytes_per_sec": 0 00:13:43.179 }, 00:13:43.179 "claimed": false, 00:13:43.179 "zoned": false, 00:13:43.179 "supported_io_types": { 00:13:43.179 "read": true, 00:13:43.179 "write": true, 00:13:43.179 "unmap": true, 00:13:43.179 "flush": true, 00:13:43.179 "reset": true, 00:13:43.179 "nvme_admin": false, 00:13:43.179 "nvme_io": false, 00:13:43.179 "nvme_io_md": false, 00:13:43.179 "write_zeroes": true, 00:13:43.179 "zcopy": true, 00:13:43.179 "get_zone_info": false, 00:13:43.179 "zone_management": false, 00:13:43.179 "zone_append": false, 
00:13:43.179 "compare": false, 00:13:43.179 "compare_and_write": false, 00:13:43.179 "abort": true, 00:13:43.179 "seek_hole": false, 00:13:43.179 "seek_data": false, 00:13:43.179 "copy": true, 00:13:43.179 "nvme_iov_md": false 00:13:43.179 }, 00:13:43.179 "memory_domains": [ 00:13:43.179 { 00:13:43.179 "dma_device_id": "system", 00:13:43.179 "dma_device_type": 1 00:13:43.179 }, 00:13:43.179 { 00:13:43.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.179 "dma_device_type": 2 00:13:43.179 } 00:13:43.179 ], 00:13:43.179 "driver_specific": {} 00:13:43.179 } 00:13:43.179 ] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 [2024-12-05 20:06:44.595667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.179 [2024-12-05 20:06:44.595711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.179 [2024-12-05 20:06:44.595733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.179 [2024-12-05 20:06:44.597643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.179 [2024-12-05 20:06:44.597702] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.179 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.439 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.439 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:43.439 "name": "Existed_Raid", 00:13:43.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.439 "strip_size_kb": 0, 00:13:43.439 "state": "configuring", 00:13:43.439 "raid_level": "raid1", 00:13:43.439 "superblock": false, 00:13:43.439 "num_base_bdevs": 4, 00:13:43.439 "num_base_bdevs_discovered": 3, 00:13:43.439 "num_base_bdevs_operational": 4, 00:13:43.439 "base_bdevs_list": [ 00:13:43.439 { 00:13:43.439 "name": "BaseBdev1", 00:13:43.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.439 "is_configured": false, 00:13:43.439 "data_offset": 0, 00:13:43.439 "data_size": 0 00:13:43.439 }, 00:13:43.439 { 00:13:43.439 "name": "BaseBdev2", 00:13:43.439 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:43.439 "is_configured": true, 00:13:43.439 "data_offset": 0, 00:13:43.439 "data_size": 65536 00:13:43.439 }, 00:13:43.439 { 00:13:43.439 "name": "BaseBdev3", 00:13:43.439 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:43.439 "is_configured": true, 00:13:43.439 "data_offset": 0, 00:13:43.439 "data_size": 65536 00:13:43.439 }, 00:13:43.439 { 00:13:43.439 "name": "BaseBdev4", 00:13:43.439 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:43.439 "is_configured": true, 00:13:43.439 "data_offset": 0, 00:13:43.439 "data_size": 65536 00:13:43.439 } 00:13:43.439 ] 00:13:43.439 }' 00:13:43.439 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.439 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.698 [2024-12-05 20:06:45.014985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.698 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.699 "name": "Existed_Raid", 00:13:43.699 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:43.699 "strip_size_kb": 0, 00:13:43.699 "state": "configuring", 00:13:43.699 "raid_level": "raid1", 00:13:43.699 "superblock": false, 00:13:43.699 "num_base_bdevs": 4, 00:13:43.699 "num_base_bdevs_discovered": 2, 00:13:43.699 "num_base_bdevs_operational": 4, 00:13:43.699 "base_bdevs_list": [ 00:13:43.699 { 00:13:43.699 "name": "BaseBdev1", 00:13:43.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.699 "is_configured": false, 00:13:43.699 "data_offset": 0, 00:13:43.699 "data_size": 0 00:13:43.699 }, 00:13:43.699 { 00:13:43.699 "name": null, 00:13:43.699 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:43.699 "is_configured": false, 00:13:43.699 "data_offset": 0, 00:13:43.699 "data_size": 65536 00:13:43.699 }, 00:13:43.699 { 00:13:43.699 "name": "BaseBdev3", 00:13:43.699 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:43.699 "is_configured": true, 00:13:43.699 "data_offset": 0, 00:13:43.699 "data_size": 65536 00:13:43.699 }, 00:13:43.699 { 00:13:43.699 "name": "BaseBdev4", 00:13:43.699 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:43.699 "is_configured": true, 00:13:43.699 "data_offset": 0, 00:13:43.699 "data_size": 65536 00:13:43.699 } 00:13:43.699 ] 00:13:43.699 }' 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.699 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 [2024-12-05 20:06:45.519504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.266 BaseBdev1 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 [ 00:13:44.266 { 00:13:44.266 "name": "BaseBdev1", 00:13:44.266 "aliases": [ 00:13:44.266 "656d96d8-f266-4531-801f-e255b27efe20" 00:13:44.266 ], 00:13:44.266 "product_name": "Malloc disk", 00:13:44.266 "block_size": 512, 00:13:44.266 "num_blocks": 65536, 00:13:44.266 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:44.266 "assigned_rate_limits": { 00:13:44.266 "rw_ios_per_sec": 0, 00:13:44.266 "rw_mbytes_per_sec": 0, 00:13:44.266 "r_mbytes_per_sec": 0, 00:13:44.266 "w_mbytes_per_sec": 0 00:13:44.266 }, 00:13:44.266 "claimed": true, 00:13:44.266 "claim_type": "exclusive_write", 00:13:44.266 "zoned": false, 00:13:44.266 "supported_io_types": { 00:13:44.266 "read": true, 00:13:44.266 "write": true, 00:13:44.266 "unmap": true, 00:13:44.266 "flush": true, 00:13:44.266 "reset": true, 00:13:44.266 "nvme_admin": false, 00:13:44.266 "nvme_io": false, 00:13:44.266 "nvme_io_md": false, 00:13:44.266 "write_zeroes": true, 00:13:44.266 "zcopy": true, 00:13:44.266 "get_zone_info": false, 00:13:44.266 "zone_management": false, 00:13:44.266 "zone_append": false, 00:13:44.266 "compare": false, 00:13:44.266 "compare_and_write": false, 00:13:44.266 "abort": true, 00:13:44.266 "seek_hole": false, 00:13:44.266 "seek_data": false, 00:13:44.266 "copy": true, 00:13:44.266 "nvme_iov_md": false 00:13:44.266 }, 00:13:44.266 "memory_domains": [ 00:13:44.266 { 00:13:44.266 "dma_device_id": "system", 00:13:44.266 "dma_device_type": 1 00:13:44.266 }, 00:13:44.266 { 00:13:44.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.266 "dma_device_type": 2 00:13:44.266 } 00:13:44.266 ], 00:13:44.266 "driver_specific": {} 00:13:44.266 } 00:13:44.266 ] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.266 "name": "Existed_Raid", 00:13:44.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:44.266 "strip_size_kb": 0, 00:13:44.266 "state": "configuring", 00:13:44.266 "raid_level": "raid1", 00:13:44.266 "superblock": false, 00:13:44.266 "num_base_bdevs": 4, 00:13:44.266 "num_base_bdevs_discovered": 3, 00:13:44.266 "num_base_bdevs_operational": 4, 00:13:44.266 "base_bdevs_list": [ 00:13:44.266 { 00:13:44.266 "name": "BaseBdev1", 00:13:44.266 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:44.266 "is_configured": true, 00:13:44.266 "data_offset": 0, 00:13:44.266 "data_size": 65536 00:13:44.266 }, 00:13:44.266 { 00:13:44.266 "name": null, 00:13:44.266 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:44.266 "is_configured": false, 00:13:44.266 "data_offset": 0, 00:13:44.266 "data_size": 65536 00:13:44.266 }, 00:13:44.266 { 00:13:44.266 "name": "BaseBdev3", 00:13:44.266 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:44.266 "is_configured": true, 00:13:44.266 "data_offset": 0, 00:13:44.266 "data_size": 65536 00:13:44.266 }, 00:13:44.266 { 00:13:44.266 "name": "BaseBdev4", 00:13:44.266 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:44.266 "is_configured": true, 00:13:44.266 "data_offset": 0, 00:13:44.266 "data_size": 65536 00:13:44.266 } 00:13:44.266 ] 00:13:44.266 }' 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.266 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.833 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.833 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.833 20:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.833 20:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.833 [2024-12-05 20:06:46.046687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.833 "name": "Existed_Raid", 00:13:44.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.833 "strip_size_kb": 0, 00:13:44.833 "state": "configuring", 00:13:44.833 "raid_level": "raid1", 00:13:44.833 "superblock": false, 00:13:44.833 "num_base_bdevs": 4, 00:13:44.833 "num_base_bdevs_discovered": 2, 00:13:44.833 "num_base_bdevs_operational": 4, 00:13:44.833 "base_bdevs_list": [ 00:13:44.833 { 00:13:44.833 "name": "BaseBdev1", 00:13:44.833 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:44.833 "is_configured": true, 00:13:44.833 "data_offset": 0, 00:13:44.833 "data_size": 65536 00:13:44.833 }, 00:13:44.833 { 00:13:44.833 "name": null, 00:13:44.833 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:44.833 "is_configured": false, 00:13:44.833 "data_offset": 0, 00:13:44.833 "data_size": 65536 00:13:44.833 }, 00:13:44.833 { 00:13:44.833 "name": null, 00:13:44.833 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:44.833 "is_configured": false, 00:13:44.833 "data_offset": 0, 00:13:44.833 "data_size": 65536 00:13:44.833 }, 00:13:44.833 { 00:13:44.833 "name": "BaseBdev4", 00:13:44.833 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:44.833 "is_configured": true, 00:13:44.833 "data_offset": 0, 00:13:44.833 "data_size": 65536 00:13:44.833 } 00:13:44.833 ] 00:13:44.833 }' 00:13:44.833 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.833 20:06:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 [2024-12-05 20:06:46.485940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.091 20:06:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.350 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.350 "name": "Existed_Raid", 00:13:45.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.350 "strip_size_kb": 0, 00:13:45.350 "state": "configuring", 00:13:45.350 "raid_level": "raid1", 00:13:45.350 "superblock": false, 00:13:45.350 "num_base_bdevs": 4, 00:13:45.350 "num_base_bdevs_discovered": 3, 00:13:45.350 "num_base_bdevs_operational": 4, 00:13:45.350 "base_bdevs_list": [ 00:13:45.350 { 00:13:45.350 "name": "BaseBdev1", 00:13:45.350 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:45.350 "is_configured": true, 00:13:45.350 "data_offset": 0, 00:13:45.350 "data_size": 65536 00:13:45.350 }, 00:13:45.350 { 00:13:45.350 "name": null, 00:13:45.350 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:45.350 "is_configured": false, 00:13:45.350 "data_offset": 
0, 00:13:45.350 "data_size": 65536 00:13:45.350 }, 00:13:45.350 { 00:13:45.350 "name": "BaseBdev3", 00:13:45.350 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:45.350 "is_configured": true, 00:13:45.350 "data_offset": 0, 00:13:45.350 "data_size": 65536 00:13:45.350 }, 00:13:45.350 { 00:13:45.350 "name": "BaseBdev4", 00:13:45.350 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:45.350 "is_configured": true, 00:13:45.350 "data_offset": 0, 00:13:45.350 "data_size": 65536 00:13:45.350 } 00:13:45.350 ] 00:13:45.350 }' 00:13:45.350 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.350 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.609 20:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 [2024-12-05 20:06:46.985141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.868 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.869 20:06:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.869 "name": "Existed_Raid", 00:13:45.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.869 "strip_size_kb": 0, 00:13:45.869 "state": "configuring", 00:13:45.869 
"raid_level": "raid1", 00:13:45.869 "superblock": false, 00:13:45.869 "num_base_bdevs": 4, 00:13:45.869 "num_base_bdevs_discovered": 2, 00:13:45.869 "num_base_bdevs_operational": 4, 00:13:45.869 "base_bdevs_list": [ 00:13:45.869 { 00:13:45.869 "name": null, 00:13:45.869 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:45.869 "is_configured": false, 00:13:45.869 "data_offset": 0, 00:13:45.869 "data_size": 65536 00:13:45.869 }, 00:13:45.869 { 00:13:45.869 "name": null, 00:13:45.869 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:45.869 "is_configured": false, 00:13:45.869 "data_offset": 0, 00:13:45.869 "data_size": 65536 00:13:45.869 }, 00:13:45.869 { 00:13:45.869 "name": "BaseBdev3", 00:13:45.869 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:45.869 "is_configured": true, 00:13:45.869 "data_offset": 0, 00:13:45.869 "data_size": 65536 00:13:45.869 }, 00:13:45.869 { 00:13:45.869 "name": "BaseBdev4", 00:13:45.869 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:45.869 "is_configured": true, 00:13:45.869 "data_offset": 0, 00:13:45.869 "data_size": 65536 00:13:45.869 } 00:13:45.869 ] 00:13:45.869 }' 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.869 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.128 [2024-12-05 20:06:47.545080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.128 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.386 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.386 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.386 "name": "Existed_Raid", 00:13:46.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.386 "strip_size_kb": 0, 00:13:46.386 "state": "configuring", 00:13:46.386 "raid_level": "raid1", 00:13:46.386 "superblock": false, 00:13:46.386 "num_base_bdevs": 4, 00:13:46.386 "num_base_bdevs_discovered": 3, 00:13:46.386 "num_base_bdevs_operational": 4, 00:13:46.386 "base_bdevs_list": [ 00:13:46.386 { 00:13:46.386 "name": null, 00:13:46.386 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:46.386 "is_configured": false, 00:13:46.386 "data_offset": 0, 00:13:46.386 "data_size": 65536 00:13:46.386 }, 00:13:46.386 { 00:13:46.386 "name": "BaseBdev2", 00:13:46.386 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:46.386 "is_configured": true, 00:13:46.386 "data_offset": 0, 00:13:46.386 "data_size": 65536 00:13:46.386 }, 00:13:46.386 { 00:13:46.386 "name": "BaseBdev3", 00:13:46.387 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:46.387 "is_configured": true, 00:13:46.387 "data_offset": 0, 00:13:46.387 "data_size": 65536 00:13:46.387 }, 00:13:46.387 { 00:13:46.387 "name": "BaseBdev4", 00:13:46.387 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:46.387 "is_configured": true, 00:13:46.387 "data_offset": 0, 00:13:46.387 "data_size": 65536 00:13:46.387 } 00:13:46.387 ] 00:13:46.387 }' 00:13:46.387 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.387 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.644 20:06:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.644 20:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.644 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.644 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.644 20:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 656d96d8-f266-4531-801f-e255b27efe20 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.644 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.644 [2024-12-05 20:06:48.077974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:46.644 [2024-12-05 20:06:48.078115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:46.644 [2024-12-05 20:06:48.078149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:46.644 
[2024-12-05 20:06:48.078481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:46.644 [2024-12-05 20:06:48.078710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:46.644 [2024-12-05 20:06:48.078761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:46.644 [2024-12-05 20:06:48.079098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.644 NewBaseBdev 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.903 [ 00:13:46.903 { 00:13:46.903 "name": "NewBaseBdev", 00:13:46.903 "aliases": [ 00:13:46.903 "656d96d8-f266-4531-801f-e255b27efe20" 00:13:46.903 ], 00:13:46.903 "product_name": "Malloc disk", 00:13:46.903 "block_size": 512, 00:13:46.903 "num_blocks": 65536, 00:13:46.903 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:46.903 "assigned_rate_limits": { 00:13:46.903 "rw_ios_per_sec": 0, 00:13:46.903 "rw_mbytes_per_sec": 0, 00:13:46.903 "r_mbytes_per_sec": 0, 00:13:46.903 "w_mbytes_per_sec": 0 00:13:46.903 }, 00:13:46.903 "claimed": true, 00:13:46.903 "claim_type": "exclusive_write", 00:13:46.903 "zoned": false, 00:13:46.903 "supported_io_types": { 00:13:46.903 "read": true, 00:13:46.903 "write": true, 00:13:46.903 "unmap": true, 00:13:46.903 "flush": true, 00:13:46.903 "reset": true, 00:13:46.903 "nvme_admin": false, 00:13:46.903 "nvme_io": false, 00:13:46.903 "nvme_io_md": false, 00:13:46.903 "write_zeroes": true, 00:13:46.903 "zcopy": true, 00:13:46.903 "get_zone_info": false, 00:13:46.903 "zone_management": false, 00:13:46.903 "zone_append": false, 00:13:46.903 "compare": false, 00:13:46.903 "compare_and_write": false, 00:13:46.903 "abort": true, 00:13:46.903 "seek_hole": false, 00:13:46.903 "seek_data": false, 00:13:46.903 "copy": true, 00:13:46.903 "nvme_iov_md": false 00:13:46.903 }, 00:13:46.903 "memory_domains": [ 00:13:46.903 { 00:13:46.903 "dma_device_id": "system", 00:13:46.903 "dma_device_type": 1 00:13:46.903 }, 00:13:46.903 { 00:13:46.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.903 "dma_device_type": 2 00:13:46.903 } 00:13:46.903 ], 00:13:46.903 "driver_specific": {} 00:13:46.903 } 00:13:46.903 ] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.903 "name": "Existed_Raid", 00:13:46.903 "uuid": "081fd71a-d9ff-4b78-852a-d06d5b09cf88", 00:13:46.903 "strip_size_kb": 0, 00:13:46.903 "state": "online", 00:13:46.903 
"raid_level": "raid1", 00:13:46.903 "superblock": false, 00:13:46.903 "num_base_bdevs": 4, 00:13:46.903 "num_base_bdevs_discovered": 4, 00:13:46.903 "num_base_bdevs_operational": 4, 00:13:46.903 "base_bdevs_list": [ 00:13:46.903 { 00:13:46.903 "name": "NewBaseBdev", 00:13:46.903 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:46.903 "is_configured": true, 00:13:46.903 "data_offset": 0, 00:13:46.903 "data_size": 65536 00:13:46.903 }, 00:13:46.903 { 00:13:46.903 "name": "BaseBdev2", 00:13:46.903 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:46.903 "is_configured": true, 00:13:46.903 "data_offset": 0, 00:13:46.903 "data_size": 65536 00:13:46.903 }, 00:13:46.903 { 00:13:46.903 "name": "BaseBdev3", 00:13:46.903 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:46.903 "is_configured": true, 00:13:46.903 "data_offset": 0, 00:13:46.903 "data_size": 65536 00:13:46.903 }, 00:13:46.903 { 00:13:46.903 "name": "BaseBdev4", 00:13:46.903 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:46.903 "is_configured": true, 00:13:46.903 "data_offset": 0, 00:13:46.903 "data_size": 65536 00:13:46.903 } 00:13:46.903 ] 00:13:46.903 }' 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.903 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.163 [2024-12-05 20:06:48.509628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.163 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:47.163 "name": "Existed_Raid", 00:13:47.163 "aliases": [ 00:13:47.163 "081fd71a-d9ff-4b78-852a-d06d5b09cf88" 00:13:47.163 ], 00:13:47.163 "product_name": "Raid Volume", 00:13:47.163 "block_size": 512, 00:13:47.163 "num_blocks": 65536, 00:13:47.163 "uuid": "081fd71a-d9ff-4b78-852a-d06d5b09cf88", 00:13:47.163 "assigned_rate_limits": { 00:13:47.163 "rw_ios_per_sec": 0, 00:13:47.163 "rw_mbytes_per_sec": 0, 00:13:47.163 "r_mbytes_per_sec": 0, 00:13:47.163 "w_mbytes_per_sec": 0 00:13:47.163 }, 00:13:47.163 "claimed": false, 00:13:47.163 "zoned": false, 00:13:47.163 "supported_io_types": { 00:13:47.163 "read": true, 00:13:47.163 "write": true, 00:13:47.163 "unmap": false, 00:13:47.163 "flush": false, 00:13:47.163 "reset": true, 00:13:47.163 "nvme_admin": false, 00:13:47.163 "nvme_io": false, 00:13:47.163 "nvme_io_md": false, 00:13:47.163 "write_zeroes": true, 00:13:47.163 "zcopy": false, 00:13:47.163 "get_zone_info": false, 00:13:47.163 "zone_management": false, 00:13:47.163 "zone_append": false, 00:13:47.163 "compare": false, 00:13:47.163 "compare_and_write": false, 00:13:47.163 "abort": false, 00:13:47.163 "seek_hole": false, 00:13:47.163 "seek_data": false, 00:13:47.163 
"copy": false, 00:13:47.163 "nvme_iov_md": false 00:13:47.163 }, 00:13:47.163 "memory_domains": [ 00:13:47.163 { 00:13:47.163 "dma_device_id": "system", 00:13:47.163 "dma_device_type": 1 00:13:47.163 }, 00:13:47.163 { 00:13:47.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.163 "dma_device_type": 2 00:13:47.163 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "system", 00:13:47.164 "dma_device_type": 1 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.164 "dma_device_type": 2 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "system", 00:13:47.164 "dma_device_type": 1 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.164 "dma_device_type": 2 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "system", 00:13:47.164 "dma_device_type": 1 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.164 "dma_device_type": 2 00:13:47.164 } 00:13:47.164 ], 00:13:47.164 "driver_specific": { 00:13:47.164 "raid": { 00:13:47.164 "uuid": "081fd71a-d9ff-4b78-852a-d06d5b09cf88", 00:13:47.164 "strip_size_kb": 0, 00:13:47.164 "state": "online", 00:13:47.164 "raid_level": "raid1", 00:13:47.164 "superblock": false, 00:13:47.164 "num_base_bdevs": 4, 00:13:47.164 "num_base_bdevs_discovered": 4, 00:13:47.164 "num_base_bdevs_operational": 4, 00:13:47.164 "base_bdevs_list": [ 00:13:47.164 { 00:13:47.164 "name": "NewBaseBdev", 00:13:47.164 "uuid": "656d96d8-f266-4531-801f-e255b27efe20", 00:13:47.164 "is_configured": true, 00:13:47.164 "data_offset": 0, 00:13:47.164 "data_size": 65536 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "name": "BaseBdev2", 00:13:47.164 "uuid": "05975884-35a9-48a7-8b90-3edda10dcdec", 00:13:47.164 "is_configured": true, 00:13:47.164 "data_offset": 0, 00:13:47.164 "data_size": 65536 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "name": "BaseBdev3", 00:13:47.164 "uuid": "996a5e13-1355-457f-a629-9aa534ad75fa", 00:13:47.164 
"is_configured": true, 00:13:47.164 "data_offset": 0, 00:13:47.164 "data_size": 65536 00:13:47.164 }, 00:13:47.164 { 00:13:47.164 "name": "BaseBdev4", 00:13:47.164 "uuid": "ec64fb42-c01e-45a3-be5b-81eb69a3be79", 00:13:47.164 "is_configured": true, 00:13:47.164 "data_offset": 0, 00:13:47.164 "data_size": 65536 00:13:47.164 } 00:13:47.164 ] 00:13:47.164 } 00:13:47.164 } 00:13:47.164 }' 00:13:47.164 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.164 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:47.164 BaseBdev2 00:13:47.164 BaseBdev3 00:13:47.164 BaseBdev4' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.425 20:06:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.425 20:06:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.425 [2024-12-05 20:06:48.780773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.425 [2024-12-05 20:06:48.780801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.425 [2024-12-05 20:06:48.780877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.425 [2024-12-05 20:06:48.781182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.425 [2024-12-05 20:06:48.781197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73313 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73313 ']' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73313 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73313 00:13:47.425 killing process with pid 73313 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73313' 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73313 00:13:47.425 [2024-12-05 20:06:48.824910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.425 20:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73313 00:13:47.994 [2024-12-05 20:06:49.243699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:49.371 ************************************ 00:13:49.371 END TEST raid_state_function_test 00:13:49.371 ************************************ 00:13:49.371 00:13:49.371 real 0m11.337s 00:13:49.371 user 0m17.859s 00:13:49.371 sys 0m2.027s 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:49.371 20:06:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:49.371 20:06:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.371 20:06:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.371 20:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.371 ************************************ 00:13:49.371 START TEST raid_state_function_test_sb 00:13:49.371 ************************************ 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:49.371 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.372 
20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73985 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73985' 00:13:49.372 Process raid pid: 73985 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73985 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73985 ']' 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.372 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.372 [2024-12-05 20:06:50.602795] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:13:49.372 [2024-12-05 20:06:50.602937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.372 [2024-12-05 20:06:50.780178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.632 [2024-12-05 20:06:50.897435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.891 [2024-12-05 20:06:51.100066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.891 [2024-12-05 20:06:51.100123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.151 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.151 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:50.151 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.151 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.151 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.151 [2024-12-05 20:06:51.468866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.152 [2024-12-05 20:06:51.468952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.152 [2024-12-05 20:06:51.468967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.152 [2024-12-05 20:06:51.468981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.152 [2024-12-05 20:06:51.468990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:50.152 [2024-12-05 20:06:51.469003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.152 [2024-12-05 20:06:51.469012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.152 [2024-12-05 20:06:51.469025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.152 20:06:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.152 "name": "Existed_Raid", 00:13:50.152 "uuid": "b1ae8f09-d512-4554-9280-084d56226533", 00:13:50.152 "strip_size_kb": 0, 00:13:50.152 "state": "configuring", 00:13:50.152 "raid_level": "raid1", 00:13:50.152 "superblock": true, 00:13:50.152 "num_base_bdevs": 4, 00:13:50.152 "num_base_bdevs_discovered": 0, 00:13:50.152 "num_base_bdevs_operational": 4, 00:13:50.152 "base_bdevs_list": [ 00:13:50.152 { 00:13:50.152 "name": "BaseBdev1", 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 0, 00:13:50.152 "data_size": 0 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": "BaseBdev2", 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 0, 00:13:50.152 "data_size": 0 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": "BaseBdev3", 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 0, 00:13:50.152 "data_size": 0 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": "BaseBdev4", 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 0, 00:13:50.152 "data_size": 0 00:13:50.152 } 00:13:50.152 ] 00:13:50.152 }' 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.152 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 [2024-12-05 20:06:51.916035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.725 [2024-12-05 20:06:51.916137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 [2024-12-05 20:06:51.924053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.725 [2024-12-05 20:06:51.924155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.725 [2024-12-05 20:06:51.924170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.725 [2024-12-05 20:06:51.924182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.725 [2024-12-05 20:06:51.924189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.725 [2024-12-05 20:06:51.924198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.725 [2024-12-05 20:06:51.924205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:50.725 [2024-12-05 20:06:51.924215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 [2024-12-05 20:06:51.974146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.725 BaseBdev1 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.725 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 [ 00:13:50.725 { 00:13:50.725 "name": "BaseBdev1", 00:13:50.726 "aliases": [ 00:13:50.726 "039ccf42-0598-4d9e-8709-e6dee63c3d1d" 00:13:50.726 ], 00:13:50.726 "product_name": "Malloc disk", 00:13:50.726 "block_size": 512, 00:13:50.726 "num_blocks": 65536, 00:13:50.726 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:50.726 "assigned_rate_limits": { 00:13:50.726 "rw_ios_per_sec": 0, 00:13:50.726 "rw_mbytes_per_sec": 0, 00:13:50.726 "r_mbytes_per_sec": 0, 00:13:50.726 "w_mbytes_per_sec": 0 00:13:50.726 }, 00:13:50.726 "claimed": true, 00:13:50.726 "claim_type": "exclusive_write", 00:13:50.726 "zoned": false, 00:13:50.726 "supported_io_types": { 00:13:50.726 "read": true, 00:13:50.726 "write": true, 00:13:50.726 "unmap": true, 00:13:50.726 "flush": true, 00:13:50.726 "reset": true, 00:13:50.726 "nvme_admin": false, 00:13:50.726 "nvme_io": false, 00:13:50.726 "nvme_io_md": false, 00:13:50.726 "write_zeroes": true, 00:13:50.726 "zcopy": true, 00:13:50.726 "get_zone_info": false, 00:13:50.726 "zone_management": false, 00:13:50.726 "zone_append": false, 00:13:50.726 "compare": false, 00:13:50.726 "compare_and_write": false, 00:13:50.726 "abort": true, 00:13:50.726 "seek_hole": false, 00:13:50.726 "seek_data": false, 00:13:50.726 "copy": true, 00:13:50.726 "nvme_iov_md": false 00:13:50.726 }, 00:13:50.726 "memory_domains": [ 00:13:50.726 { 00:13:50.726 "dma_device_id": "system", 00:13:50.726 "dma_device_type": 1 00:13:50.726 }, 00:13:50.726 { 00:13:50.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.726 "dma_device_type": 2 00:13:50.726 } 00:13:50.726 ], 00:13:50.726 "driver_specific": {} 
00:13:50.726 } 00:13:50.726 ] 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.726 "name": "Existed_Raid", 00:13:50.726 "uuid": "e3314fd8-046a-4ca0-b74d-bb14ef5f57e2", 00:13:50.726 "strip_size_kb": 0, 00:13:50.726 "state": "configuring", 00:13:50.726 "raid_level": "raid1", 00:13:50.726 "superblock": true, 00:13:50.726 "num_base_bdevs": 4, 00:13:50.726 "num_base_bdevs_discovered": 1, 00:13:50.726 "num_base_bdevs_operational": 4, 00:13:50.726 "base_bdevs_list": [ 00:13:50.726 { 00:13:50.726 "name": "BaseBdev1", 00:13:50.726 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:50.726 "is_configured": true, 00:13:50.726 "data_offset": 2048, 00:13:50.726 "data_size": 63488 00:13:50.726 }, 00:13:50.726 { 00:13:50.726 "name": "BaseBdev2", 00:13:50.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.726 "is_configured": false, 00:13:50.726 "data_offset": 0, 00:13:50.726 "data_size": 0 00:13:50.726 }, 00:13:50.726 { 00:13:50.726 "name": "BaseBdev3", 00:13:50.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.726 "is_configured": false, 00:13:50.726 "data_offset": 0, 00:13:50.726 "data_size": 0 00:13:50.726 }, 00:13:50.726 { 00:13:50.726 "name": "BaseBdev4", 00:13:50.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.726 "is_configured": false, 00:13:50.726 "data_offset": 0, 00:13:50.726 "data_size": 0 00:13:50.726 } 00:13:50.726 ] 00:13:50.726 }' 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.726 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.297 [2024-12-05 20:06:52.445394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.297 [2024-12-05 20:06:52.445530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.297 [2024-12-05 20:06:52.457416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.297 [2024-12-05 20:06:52.459225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.297 [2024-12-05 20:06:52.459330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.297 [2024-12-05 20:06:52.459347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.297 [2024-12-05 20:06:52.459360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.297 [2024-12-05 20:06:52.459368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:51.297 [2024-12-05 20:06:52.459378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:51.297 20:06:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.297 "name": 
"Existed_Raid", 00:13:51.297 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:51.297 "strip_size_kb": 0, 00:13:51.297 "state": "configuring", 00:13:51.297 "raid_level": "raid1", 00:13:51.297 "superblock": true, 00:13:51.297 "num_base_bdevs": 4, 00:13:51.297 "num_base_bdevs_discovered": 1, 00:13:51.297 "num_base_bdevs_operational": 4, 00:13:51.297 "base_bdevs_list": [ 00:13:51.297 { 00:13:51.297 "name": "BaseBdev1", 00:13:51.297 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:51.297 "is_configured": true, 00:13:51.297 "data_offset": 2048, 00:13:51.297 "data_size": 63488 00:13:51.297 }, 00:13:51.297 { 00:13:51.297 "name": "BaseBdev2", 00:13:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.297 "is_configured": false, 00:13:51.297 "data_offset": 0, 00:13:51.297 "data_size": 0 00:13:51.297 }, 00:13:51.297 { 00:13:51.297 "name": "BaseBdev3", 00:13:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.297 "is_configured": false, 00:13:51.297 "data_offset": 0, 00:13:51.297 "data_size": 0 00:13:51.297 }, 00:13:51.297 { 00:13:51.297 "name": "BaseBdev4", 00:13:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.297 "is_configured": false, 00:13:51.297 "data_offset": 0, 00:13:51.297 "data_size": 0 00:13:51.297 } 00:13:51.297 ] 00:13:51.297 }' 00:13:51.297 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.298 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.558 [2024-12-05 20:06:52.938657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.558 
BaseBdev2 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.558 [ 00:13:51.558 { 00:13:51.558 "name": "BaseBdev2", 00:13:51.558 "aliases": [ 00:13:51.558 "fffc2273-bdfa-4618-9292-0316e9873937" 00:13:51.558 ], 00:13:51.558 "product_name": "Malloc disk", 00:13:51.558 "block_size": 512, 00:13:51.558 "num_blocks": 65536, 00:13:51.558 "uuid": "fffc2273-bdfa-4618-9292-0316e9873937", 00:13:51.558 "assigned_rate_limits": { 
00:13:51.558 "rw_ios_per_sec": 0, 00:13:51.558 "rw_mbytes_per_sec": 0, 00:13:51.558 "r_mbytes_per_sec": 0, 00:13:51.558 "w_mbytes_per_sec": 0 00:13:51.558 }, 00:13:51.558 "claimed": true, 00:13:51.558 "claim_type": "exclusive_write", 00:13:51.558 "zoned": false, 00:13:51.558 "supported_io_types": { 00:13:51.558 "read": true, 00:13:51.558 "write": true, 00:13:51.558 "unmap": true, 00:13:51.558 "flush": true, 00:13:51.558 "reset": true, 00:13:51.558 "nvme_admin": false, 00:13:51.558 "nvme_io": false, 00:13:51.558 "nvme_io_md": false, 00:13:51.558 "write_zeroes": true, 00:13:51.558 "zcopy": true, 00:13:51.558 "get_zone_info": false, 00:13:51.558 "zone_management": false, 00:13:51.558 "zone_append": false, 00:13:51.558 "compare": false, 00:13:51.558 "compare_and_write": false, 00:13:51.558 "abort": true, 00:13:51.558 "seek_hole": false, 00:13:51.558 "seek_data": false, 00:13:51.558 "copy": true, 00:13:51.558 "nvme_iov_md": false 00:13:51.558 }, 00:13:51.558 "memory_domains": [ 00:13:51.558 { 00:13:51.558 "dma_device_id": "system", 00:13:51.558 "dma_device_type": 1 00:13:51.558 }, 00:13:51.558 { 00:13:51.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.558 "dma_device_type": 2 00:13:51.558 } 00:13:51.558 ], 00:13:51.558 "driver_specific": {} 00:13:51.558 } 00:13:51.558 ] 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.558 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.818 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.818 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.818 "name": "Existed_Raid", 00:13:51.818 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:51.818 "strip_size_kb": 0, 00:13:51.818 "state": "configuring", 00:13:51.818 "raid_level": "raid1", 00:13:51.818 "superblock": true, 00:13:51.818 "num_base_bdevs": 4, 00:13:51.818 "num_base_bdevs_discovered": 2, 00:13:51.818 "num_base_bdevs_operational": 4, 00:13:51.818 
"base_bdevs_list": [ 00:13:51.818 { 00:13:51.818 "name": "BaseBdev1", 00:13:51.818 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:51.818 "is_configured": true, 00:13:51.818 "data_offset": 2048, 00:13:51.818 "data_size": 63488 00:13:51.818 }, 00:13:51.818 { 00:13:51.818 "name": "BaseBdev2", 00:13:51.818 "uuid": "fffc2273-bdfa-4618-9292-0316e9873937", 00:13:51.818 "is_configured": true, 00:13:51.818 "data_offset": 2048, 00:13:51.818 "data_size": 63488 00:13:51.818 }, 00:13:51.818 { 00:13:51.818 "name": "BaseBdev3", 00:13:51.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.818 "is_configured": false, 00:13:51.818 "data_offset": 0, 00:13:51.818 "data_size": 0 00:13:51.818 }, 00:13:51.818 { 00:13:51.818 "name": "BaseBdev4", 00:13:51.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.818 "is_configured": false, 00:13:51.818 "data_offset": 0, 00:13:51.818 "data_size": 0 00:13:51.818 } 00:13:51.818 ] 00:13:51.818 }' 00:13:51.818 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.818 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.077 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.077 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.077 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.077 [2024-12-05 20:06:53.422121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.077 BaseBdev3 00:13:52.077 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.078 [ 00:13:52.078 { 00:13:52.078 "name": "BaseBdev3", 00:13:52.078 "aliases": [ 00:13:52.078 "afe6367f-ad9f-4c51-bf0d-3cce6d40af54" 00:13:52.078 ], 00:13:52.078 "product_name": "Malloc disk", 00:13:52.078 "block_size": 512, 00:13:52.078 "num_blocks": 65536, 00:13:52.078 "uuid": "afe6367f-ad9f-4c51-bf0d-3cce6d40af54", 00:13:52.078 "assigned_rate_limits": { 00:13:52.078 "rw_ios_per_sec": 0, 00:13:52.078 "rw_mbytes_per_sec": 0, 00:13:52.078 "r_mbytes_per_sec": 0, 00:13:52.078 "w_mbytes_per_sec": 0 00:13:52.078 }, 00:13:52.078 "claimed": true, 00:13:52.078 "claim_type": "exclusive_write", 00:13:52.078 "zoned": false, 00:13:52.078 "supported_io_types": { 00:13:52.078 "read": true, 00:13:52.078 
"write": true, 00:13:52.078 "unmap": true, 00:13:52.078 "flush": true, 00:13:52.078 "reset": true, 00:13:52.078 "nvme_admin": false, 00:13:52.078 "nvme_io": false, 00:13:52.078 "nvme_io_md": false, 00:13:52.078 "write_zeroes": true, 00:13:52.078 "zcopy": true, 00:13:52.078 "get_zone_info": false, 00:13:52.078 "zone_management": false, 00:13:52.078 "zone_append": false, 00:13:52.078 "compare": false, 00:13:52.078 "compare_and_write": false, 00:13:52.078 "abort": true, 00:13:52.078 "seek_hole": false, 00:13:52.078 "seek_data": false, 00:13:52.078 "copy": true, 00:13:52.078 "nvme_iov_md": false 00:13:52.078 }, 00:13:52.078 "memory_domains": [ 00:13:52.078 { 00:13:52.078 "dma_device_id": "system", 00:13:52.078 "dma_device_type": 1 00:13:52.078 }, 00:13:52.078 { 00:13:52.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.078 "dma_device_type": 2 00:13:52.078 } 00:13:52.078 ], 00:13:52.078 "driver_specific": {} 00:13:52.078 } 00:13:52.078 ] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.078 "name": "Existed_Raid", 00:13:52.078 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:52.078 "strip_size_kb": 0, 00:13:52.078 "state": "configuring", 00:13:52.078 "raid_level": "raid1", 00:13:52.078 "superblock": true, 00:13:52.078 "num_base_bdevs": 4, 00:13:52.078 "num_base_bdevs_discovered": 3, 00:13:52.078 "num_base_bdevs_operational": 4, 00:13:52.078 "base_bdevs_list": [ 00:13:52.078 { 00:13:52.078 "name": "BaseBdev1", 00:13:52.078 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:52.078 "is_configured": true, 00:13:52.078 "data_offset": 2048, 00:13:52.078 "data_size": 63488 00:13:52.078 }, 00:13:52.078 { 00:13:52.078 "name": "BaseBdev2", 00:13:52.078 "uuid": 
"fffc2273-bdfa-4618-9292-0316e9873937", 00:13:52.078 "is_configured": true, 00:13:52.078 "data_offset": 2048, 00:13:52.078 "data_size": 63488 00:13:52.078 }, 00:13:52.078 { 00:13:52.078 "name": "BaseBdev3", 00:13:52.078 "uuid": "afe6367f-ad9f-4c51-bf0d-3cce6d40af54", 00:13:52.078 "is_configured": true, 00:13:52.078 "data_offset": 2048, 00:13:52.078 "data_size": 63488 00:13:52.078 }, 00:13:52.078 { 00:13:52.078 "name": "BaseBdev4", 00:13:52.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.078 "is_configured": false, 00:13:52.078 "data_offset": 0, 00:13:52.078 "data_size": 0 00:13:52.078 } 00:13:52.078 ] 00:13:52.078 }' 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.078 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.648 [2024-12-05 20:06:53.942605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.648 [2024-12-05 20:06:53.942971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:52.648 [2024-12-05 20:06:53.942992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.648 [2024-12-05 20:06:53.943273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.648 [2024-12-05 20:06:53.943455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:52.648 [2024-12-05 20:06:53.943468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:13:52.648 BaseBdev4 00:13:52.648 [2024-12-05 20:06:53.943631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.648 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.648 [ 00:13:52.649 { 00:13:52.649 "name": "BaseBdev4", 00:13:52.649 "aliases": [ 00:13:52.649 "ed447a9d-4b95-42fc-88d0-3dd0f7fa772a" 00:13:52.649 ], 00:13:52.649 "product_name": "Malloc disk", 00:13:52.649 "block_size": 512, 00:13:52.649 
"num_blocks": 65536, 00:13:52.649 "uuid": "ed447a9d-4b95-42fc-88d0-3dd0f7fa772a", 00:13:52.649 "assigned_rate_limits": { 00:13:52.649 "rw_ios_per_sec": 0, 00:13:52.649 "rw_mbytes_per_sec": 0, 00:13:52.649 "r_mbytes_per_sec": 0, 00:13:52.649 "w_mbytes_per_sec": 0 00:13:52.649 }, 00:13:52.649 "claimed": true, 00:13:52.649 "claim_type": "exclusive_write", 00:13:52.649 "zoned": false, 00:13:52.649 "supported_io_types": { 00:13:52.649 "read": true, 00:13:52.649 "write": true, 00:13:52.649 "unmap": true, 00:13:52.649 "flush": true, 00:13:52.649 "reset": true, 00:13:52.649 "nvme_admin": false, 00:13:52.649 "nvme_io": false, 00:13:52.649 "nvme_io_md": false, 00:13:52.649 "write_zeroes": true, 00:13:52.649 "zcopy": true, 00:13:52.649 "get_zone_info": false, 00:13:52.649 "zone_management": false, 00:13:52.649 "zone_append": false, 00:13:52.649 "compare": false, 00:13:52.649 "compare_and_write": false, 00:13:52.649 "abort": true, 00:13:52.649 "seek_hole": false, 00:13:52.649 "seek_data": false, 00:13:52.649 "copy": true, 00:13:52.649 "nvme_iov_md": false 00:13:52.649 }, 00:13:52.649 "memory_domains": [ 00:13:52.649 { 00:13:52.649 "dma_device_id": "system", 00:13:52.649 "dma_device_type": 1 00:13:52.649 }, 00:13:52.649 { 00:13:52.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.649 "dma_device_type": 2 00:13:52.649 } 00:13:52.649 ], 00:13:52.649 "driver_specific": {} 00:13:52.649 } 00:13:52.649 ] 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.649 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.649 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.649 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.649 "name": "Existed_Raid", 00:13:52.649 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:52.649 "strip_size_kb": 0, 00:13:52.649 "state": "online", 00:13:52.649 "raid_level": "raid1", 00:13:52.649 "superblock": true, 00:13:52.649 "num_base_bdevs": 4, 
00:13:52.649 "num_base_bdevs_discovered": 4, 00:13:52.649 "num_base_bdevs_operational": 4, 00:13:52.649 "base_bdevs_list": [ 00:13:52.649 { 00:13:52.649 "name": "BaseBdev1", 00:13:52.649 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:52.649 "is_configured": true, 00:13:52.649 "data_offset": 2048, 00:13:52.649 "data_size": 63488 00:13:52.649 }, 00:13:52.649 { 00:13:52.649 "name": "BaseBdev2", 00:13:52.649 "uuid": "fffc2273-bdfa-4618-9292-0316e9873937", 00:13:52.649 "is_configured": true, 00:13:52.649 "data_offset": 2048, 00:13:52.649 "data_size": 63488 00:13:52.649 }, 00:13:52.649 { 00:13:52.649 "name": "BaseBdev3", 00:13:52.649 "uuid": "afe6367f-ad9f-4c51-bf0d-3cce6d40af54", 00:13:52.649 "is_configured": true, 00:13:52.649 "data_offset": 2048, 00:13:52.649 "data_size": 63488 00:13:52.649 }, 00:13:52.649 { 00:13:52.649 "name": "BaseBdev4", 00:13:52.649 "uuid": "ed447a9d-4b95-42fc-88d0-3dd0f7fa772a", 00:13:52.649 "is_configured": true, 00:13:52.649 "data_offset": 2048, 00:13:52.649 "data_size": 63488 00:13:52.649 } 00:13:52.649 ] 00:13:52.649 }' 00:13:52.649 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.649 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.219 
20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.219 [2024-12-05 20:06:54.414217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.219 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.219 "name": "Existed_Raid", 00:13:53.219 "aliases": [ 00:13:53.219 "50d6bddf-ba68-4922-900b-2cf1abde6cf8" 00:13:53.219 ], 00:13:53.219 "product_name": "Raid Volume", 00:13:53.219 "block_size": 512, 00:13:53.219 "num_blocks": 63488, 00:13:53.220 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:53.220 "assigned_rate_limits": { 00:13:53.220 "rw_ios_per_sec": 0, 00:13:53.220 "rw_mbytes_per_sec": 0, 00:13:53.220 "r_mbytes_per_sec": 0, 00:13:53.220 "w_mbytes_per_sec": 0 00:13:53.220 }, 00:13:53.220 "claimed": false, 00:13:53.220 "zoned": false, 00:13:53.220 "supported_io_types": { 00:13:53.220 "read": true, 00:13:53.220 "write": true, 00:13:53.220 "unmap": false, 00:13:53.220 "flush": false, 00:13:53.220 "reset": true, 00:13:53.220 "nvme_admin": false, 00:13:53.220 "nvme_io": false, 00:13:53.220 "nvme_io_md": false, 00:13:53.220 "write_zeroes": true, 00:13:53.220 "zcopy": false, 00:13:53.220 "get_zone_info": false, 00:13:53.220 "zone_management": false, 00:13:53.220 "zone_append": false, 00:13:53.220 "compare": false, 00:13:53.220 "compare_and_write": false, 00:13:53.220 "abort": false, 00:13:53.220 "seek_hole": false, 00:13:53.220 "seek_data": false, 00:13:53.220 "copy": false, 00:13:53.220 
"nvme_iov_md": false 00:13:53.220 }, 00:13:53.220 "memory_domains": [ 00:13:53.220 { 00:13:53.220 "dma_device_id": "system", 00:13:53.220 "dma_device_type": 1 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.220 "dma_device_type": 2 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "system", 00:13:53.220 "dma_device_type": 1 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.220 "dma_device_type": 2 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "system", 00:13:53.220 "dma_device_type": 1 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.220 "dma_device_type": 2 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "system", 00:13:53.220 "dma_device_type": 1 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.220 "dma_device_type": 2 00:13:53.220 } 00:13:53.220 ], 00:13:53.220 "driver_specific": { 00:13:53.220 "raid": { 00:13:53.220 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:53.220 "strip_size_kb": 0, 00:13:53.220 "state": "online", 00:13:53.220 "raid_level": "raid1", 00:13:53.220 "superblock": true, 00:13:53.220 "num_base_bdevs": 4, 00:13:53.220 "num_base_bdevs_discovered": 4, 00:13:53.220 "num_base_bdevs_operational": 4, 00:13:53.220 "base_bdevs_list": [ 00:13:53.220 { 00:13:53.220 "name": "BaseBdev1", 00:13:53.220 "uuid": "039ccf42-0598-4d9e-8709-e6dee63c3d1d", 00:13:53.220 "is_configured": true, 00:13:53.220 "data_offset": 2048, 00:13:53.220 "data_size": 63488 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "name": "BaseBdev2", 00:13:53.220 "uuid": "fffc2273-bdfa-4618-9292-0316e9873937", 00:13:53.220 "is_configured": true, 00:13:53.220 "data_offset": 2048, 00:13:53.220 "data_size": 63488 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "name": "BaseBdev3", 00:13:53.220 "uuid": "afe6367f-ad9f-4c51-bf0d-3cce6d40af54", 00:13:53.220 "is_configured": true, 
00:13:53.220 "data_offset": 2048, 00:13:53.220 "data_size": 63488 00:13:53.220 }, 00:13:53.220 { 00:13:53.220 "name": "BaseBdev4", 00:13:53.220 "uuid": "ed447a9d-4b95-42fc-88d0-3dd0f7fa772a", 00:13:53.220 "is_configured": true, 00:13:53.220 "data_offset": 2048, 00:13:53.220 "data_size": 63488 00:13:53.220 } 00:13:53.220 ] 00:13:53.220 } 00:13:53.220 } 00:13:53.220 }' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.220 BaseBdev2 00:13:53.220 BaseBdev3 00:13:53.220 BaseBdev4' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.220 20:06:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.220 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.481 [2024-12-05 20:06:54.729406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:53.481 20:06:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.481 "name": "Existed_Raid", 00:13:53.481 "uuid": "50d6bddf-ba68-4922-900b-2cf1abde6cf8", 00:13:53.481 "strip_size_kb": 0, 00:13:53.481 
"state": "online", 00:13:53.481 "raid_level": "raid1", 00:13:53.481 "superblock": true, 00:13:53.481 "num_base_bdevs": 4, 00:13:53.481 "num_base_bdevs_discovered": 3, 00:13:53.481 "num_base_bdevs_operational": 3, 00:13:53.481 "base_bdevs_list": [ 00:13:53.481 { 00:13:53.481 "name": null, 00:13:53.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.481 "is_configured": false, 00:13:53.481 "data_offset": 0, 00:13:53.481 "data_size": 63488 00:13:53.481 }, 00:13:53.481 { 00:13:53.481 "name": "BaseBdev2", 00:13:53.481 "uuid": "fffc2273-bdfa-4618-9292-0316e9873937", 00:13:53.481 "is_configured": true, 00:13:53.481 "data_offset": 2048, 00:13:53.481 "data_size": 63488 00:13:53.481 }, 00:13:53.481 { 00:13:53.481 "name": "BaseBdev3", 00:13:53.481 "uuid": "afe6367f-ad9f-4c51-bf0d-3cce6d40af54", 00:13:53.481 "is_configured": true, 00:13:53.481 "data_offset": 2048, 00:13:53.481 "data_size": 63488 00:13:53.481 }, 00:13:53.481 { 00:13:53.481 "name": "BaseBdev4", 00:13:53.481 "uuid": "ed447a9d-4b95-42fc-88d0-3dd0f7fa772a", 00:13:53.481 "is_configured": true, 00:13:53.481 "data_offset": 2048, 00:13:53.481 "data_size": 63488 00:13:53.481 } 00:13:53.481 ] 00:13:53.481 }' 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.481 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.051 20:06:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.051 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 [2024-12-05 20:06:55.266594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.052 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.052 [2024-12-05 20:06:55.420457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.311 [2024-12-05 20:06:55.574107] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:54.311 [2024-12-05 20:06:55.574267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.311 [2024-12-05 20:06:55.669114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.311 [2024-12-05 20:06:55.669248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.311 [2024-12-05 20:06:55.669291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.311 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 BaseBdev2 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:54.571 [ 00:13:54.571 { 00:13:54.571 "name": "BaseBdev2", 00:13:54.571 "aliases": [ 00:13:54.571 "fc37a6a0-242d-4151-8f42-984eb357041e" 00:13:54.571 ], 00:13:54.571 "product_name": "Malloc disk", 00:13:54.571 "block_size": 512, 00:13:54.571 "num_blocks": 65536, 00:13:54.571 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:54.571 "assigned_rate_limits": { 00:13:54.571 "rw_ios_per_sec": 0, 00:13:54.571 "rw_mbytes_per_sec": 0, 00:13:54.571 "r_mbytes_per_sec": 0, 00:13:54.571 "w_mbytes_per_sec": 0 00:13:54.571 }, 00:13:54.571 "claimed": false, 00:13:54.571 "zoned": false, 00:13:54.571 "supported_io_types": { 00:13:54.571 "read": true, 00:13:54.571 "write": true, 00:13:54.571 "unmap": true, 00:13:54.571 "flush": true, 00:13:54.571 "reset": true, 00:13:54.571 "nvme_admin": false, 00:13:54.571 "nvme_io": false, 00:13:54.571 "nvme_io_md": false, 00:13:54.571 "write_zeroes": true, 00:13:54.571 "zcopy": true, 00:13:54.571 "get_zone_info": false, 00:13:54.571 "zone_management": false, 00:13:54.571 "zone_append": false, 00:13:54.571 "compare": false, 00:13:54.571 "compare_and_write": false, 00:13:54.571 "abort": true, 00:13:54.571 "seek_hole": false, 00:13:54.571 "seek_data": false, 00:13:54.571 "copy": true, 00:13:54.571 "nvme_iov_md": false 00:13:54.571 }, 00:13:54.571 "memory_domains": [ 00:13:54.571 { 00:13:54.571 "dma_device_id": "system", 00:13:54.571 "dma_device_type": 1 00:13:54.571 }, 00:13:54.571 { 00:13:54.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.571 "dma_device_type": 2 00:13:54.571 } 00:13:54.571 ], 00:13:54.571 "driver_specific": {} 00:13:54.571 } 00:13:54.571 ] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.571 20:06:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 BaseBdev3 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 [ 00:13:54.571 { 00:13:54.571 "name": "BaseBdev3", 00:13:54.571 "aliases": [ 00:13:54.571 "30e44f2d-0669-4dbe-8370-6aa0c1498073" 00:13:54.571 ], 00:13:54.571 "product_name": "Malloc disk", 00:13:54.571 "block_size": 512, 00:13:54.571 "num_blocks": 65536, 00:13:54.571 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:54.571 "assigned_rate_limits": { 00:13:54.571 "rw_ios_per_sec": 0, 00:13:54.571 "rw_mbytes_per_sec": 0, 00:13:54.571 "r_mbytes_per_sec": 0, 00:13:54.571 "w_mbytes_per_sec": 0 00:13:54.571 }, 00:13:54.571 "claimed": false, 00:13:54.571 "zoned": false, 00:13:54.571 "supported_io_types": { 00:13:54.571 "read": true, 00:13:54.571 "write": true, 00:13:54.571 "unmap": true, 00:13:54.571 "flush": true, 00:13:54.571 "reset": true, 00:13:54.571 "nvme_admin": false, 00:13:54.571 "nvme_io": false, 00:13:54.571 "nvme_io_md": false, 00:13:54.571 "write_zeroes": true, 00:13:54.571 "zcopy": true, 00:13:54.571 "get_zone_info": false, 00:13:54.571 "zone_management": false, 00:13:54.571 "zone_append": false, 00:13:54.571 "compare": false, 00:13:54.571 "compare_and_write": false, 00:13:54.571 "abort": true, 00:13:54.571 "seek_hole": false, 00:13:54.571 "seek_data": false, 00:13:54.571 "copy": true, 00:13:54.571 "nvme_iov_md": false 00:13:54.571 }, 00:13:54.571 "memory_domains": [ 00:13:54.571 { 00:13:54.571 "dma_device_id": "system", 00:13:54.571 "dma_device_type": 1 00:13:54.571 }, 00:13:54.571 { 00:13:54.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.571 "dma_device_type": 2 00:13:54.571 } 00:13:54.571 ], 00:13:54.571 "driver_specific": {} 00:13:54.571 } 00:13:54.571 ] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 BaseBdev4 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:54.571 20:06:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.572 [ 00:13:54.572 { 00:13:54.572 "name": "BaseBdev4", 00:13:54.572 "aliases": [ 00:13:54.572 "cd65dd79-c570-4ce2-87d0-3bb0721ccf63" 00:13:54.572 ], 00:13:54.572 "product_name": "Malloc disk", 00:13:54.572 "block_size": 512, 00:13:54.572 "num_blocks": 65536, 00:13:54.572 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:54.572 "assigned_rate_limits": { 00:13:54.572 "rw_ios_per_sec": 0, 00:13:54.572 "rw_mbytes_per_sec": 0, 00:13:54.572 "r_mbytes_per_sec": 0, 00:13:54.572 "w_mbytes_per_sec": 0 00:13:54.572 }, 00:13:54.572 "claimed": false, 00:13:54.572 "zoned": false, 00:13:54.572 "supported_io_types": { 00:13:54.572 "read": true, 00:13:54.572 "write": true, 00:13:54.572 "unmap": true, 00:13:54.572 "flush": true, 00:13:54.572 "reset": true, 00:13:54.572 "nvme_admin": false, 00:13:54.572 "nvme_io": false, 00:13:54.572 "nvme_io_md": false, 00:13:54.572 "write_zeroes": true, 00:13:54.572 "zcopy": true, 00:13:54.572 "get_zone_info": false, 00:13:54.572 "zone_management": false, 00:13:54.572 "zone_append": false, 00:13:54.572 "compare": false, 00:13:54.572 "compare_and_write": false, 00:13:54.572 "abort": true, 00:13:54.572 "seek_hole": false, 00:13:54.572 "seek_data": false, 00:13:54.572 "copy": true, 00:13:54.572 "nvme_iov_md": false 00:13:54.572 }, 00:13:54.572 "memory_domains": [ 00:13:54.572 { 00:13:54.572 "dma_device_id": "system", 00:13:54.572 "dma_device_type": 1 00:13:54.572 }, 00:13:54.572 { 00:13:54.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.572 "dma_device_type": 2 00:13:54.572 } 00:13:54.572 ], 00:13:54.572 "driver_specific": {} 00:13:54.572 } 00:13:54.572 ] 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.572 [2024-12-05 20:06:55.966595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.572 [2024-12-05 20:06:55.966702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.572 [2024-12-05 20:06:55.966756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.572 [2024-12-05 20:06:55.968722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.572 [2024-12-05 20:06:55.968813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.572 20:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.830 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.830 "name": "Existed_Raid", 00:13:54.830 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:54.830 "strip_size_kb": 0, 00:13:54.830 "state": "configuring", 00:13:54.830 "raid_level": "raid1", 00:13:54.830 "superblock": true, 00:13:54.830 "num_base_bdevs": 4, 00:13:54.830 "num_base_bdevs_discovered": 3, 00:13:54.830 "num_base_bdevs_operational": 4, 00:13:54.830 "base_bdevs_list": [ 00:13:54.830 { 00:13:54.830 "name": "BaseBdev1", 00:13:54.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.830 "is_configured": false, 00:13:54.830 "data_offset": 0, 00:13:54.830 "data_size": 0 00:13:54.830 }, 00:13:54.830 { 00:13:54.830 "name": "BaseBdev2", 00:13:54.830 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 
00:13:54.830 "is_configured": true, 00:13:54.830 "data_offset": 2048, 00:13:54.830 "data_size": 63488 00:13:54.830 }, 00:13:54.830 { 00:13:54.830 "name": "BaseBdev3", 00:13:54.830 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:54.830 "is_configured": true, 00:13:54.830 "data_offset": 2048, 00:13:54.830 "data_size": 63488 00:13:54.830 }, 00:13:54.830 { 00:13:54.830 "name": "BaseBdev4", 00:13:54.830 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:54.830 "is_configured": true, 00:13:54.830 "data_offset": 2048, 00:13:54.830 "data_size": 63488 00:13:54.830 } 00:13:54.830 ] 00:13:54.830 }' 00:13:54.830 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.830 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.088 [2024-12-05 20:06:56.429784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.088 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.088 "name": "Existed_Raid", 00:13:55.088 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:55.088 "strip_size_kb": 0, 00:13:55.088 "state": "configuring", 00:13:55.088 "raid_level": "raid1", 00:13:55.088 "superblock": true, 00:13:55.088 "num_base_bdevs": 4, 00:13:55.088 "num_base_bdevs_discovered": 2, 00:13:55.088 "num_base_bdevs_operational": 4, 00:13:55.088 "base_bdevs_list": [ 00:13:55.088 { 00:13:55.088 "name": "BaseBdev1", 00:13:55.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.088 "is_configured": false, 00:13:55.088 "data_offset": 0, 00:13:55.088 "data_size": 0 00:13:55.088 }, 00:13:55.088 { 00:13:55.088 "name": null, 00:13:55.088 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:55.088 
"is_configured": false, 00:13:55.088 "data_offset": 0, 00:13:55.088 "data_size": 63488 00:13:55.089 }, 00:13:55.089 { 00:13:55.089 "name": "BaseBdev3", 00:13:55.089 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:55.089 "is_configured": true, 00:13:55.089 "data_offset": 2048, 00:13:55.089 "data_size": 63488 00:13:55.089 }, 00:13:55.089 { 00:13:55.089 "name": "BaseBdev4", 00:13:55.089 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:55.089 "is_configured": true, 00:13:55.089 "data_offset": 2048, 00:13:55.089 "data_size": 63488 00:13:55.089 } 00:13:55.089 ] 00:13:55.089 }' 00:13:55.089 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.089 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.656 [2024-12-05 20:06:56.929649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.656 BaseBdev1 
00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.656 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.656 [ 00:13:55.656 { 00:13:55.656 "name": "BaseBdev1", 00:13:55.656 "aliases": [ 00:13:55.656 "f8b8d070-a9d9-4371-af63-83de2c246f6b" 00:13:55.656 ], 00:13:55.656 "product_name": "Malloc disk", 00:13:55.656 "block_size": 512, 00:13:55.656 "num_blocks": 65536, 00:13:55.656 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:55.656 "assigned_rate_limits": { 00:13:55.656 
"rw_ios_per_sec": 0, 00:13:55.656 "rw_mbytes_per_sec": 0, 00:13:55.656 "r_mbytes_per_sec": 0, 00:13:55.656 "w_mbytes_per_sec": 0 00:13:55.656 }, 00:13:55.656 "claimed": true, 00:13:55.656 "claim_type": "exclusive_write", 00:13:55.656 "zoned": false, 00:13:55.656 "supported_io_types": { 00:13:55.656 "read": true, 00:13:55.657 "write": true, 00:13:55.657 "unmap": true, 00:13:55.657 "flush": true, 00:13:55.657 "reset": true, 00:13:55.657 "nvme_admin": false, 00:13:55.657 "nvme_io": false, 00:13:55.657 "nvme_io_md": false, 00:13:55.657 "write_zeroes": true, 00:13:55.657 "zcopy": true, 00:13:55.657 "get_zone_info": false, 00:13:55.657 "zone_management": false, 00:13:55.657 "zone_append": false, 00:13:55.657 "compare": false, 00:13:55.657 "compare_and_write": false, 00:13:55.657 "abort": true, 00:13:55.657 "seek_hole": false, 00:13:55.657 "seek_data": false, 00:13:55.657 "copy": true, 00:13:55.657 "nvme_iov_md": false 00:13:55.657 }, 00:13:55.657 "memory_domains": [ 00:13:55.657 { 00:13:55.657 "dma_device_id": "system", 00:13:55.657 "dma_device_type": 1 00:13:55.657 }, 00:13:55.657 { 00:13:55.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.657 "dma_device_type": 2 00:13:55.657 } 00:13:55.657 ], 00:13:55.657 "driver_specific": {} 00:13:55.657 } 00:13:55.657 ] 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.657 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.657 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.657 "name": "Existed_Raid", 00:13:55.657 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:55.657 "strip_size_kb": 0, 00:13:55.657 "state": "configuring", 00:13:55.657 "raid_level": "raid1", 00:13:55.657 "superblock": true, 00:13:55.657 "num_base_bdevs": 4, 00:13:55.657 "num_base_bdevs_discovered": 3, 00:13:55.657 "num_base_bdevs_operational": 4, 00:13:55.657 "base_bdevs_list": [ 00:13:55.657 { 00:13:55.657 "name": "BaseBdev1", 00:13:55.657 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:55.657 "is_configured": true, 00:13:55.657 "data_offset": 2048, 00:13:55.657 "data_size": 63488 
00:13:55.657 }, 00:13:55.657 { 00:13:55.657 "name": null, 00:13:55.657 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:55.657 "is_configured": false, 00:13:55.657 "data_offset": 0, 00:13:55.657 "data_size": 63488 00:13:55.657 }, 00:13:55.657 { 00:13:55.657 "name": "BaseBdev3", 00:13:55.657 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:55.657 "is_configured": true, 00:13:55.657 "data_offset": 2048, 00:13:55.657 "data_size": 63488 00:13:55.657 }, 00:13:55.657 { 00:13:55.657 "name": "BaseBdev4", 00:13:55.657 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:55.657 "is_configured": true, 00:13:55.657 "data_offset": 2048, 00:13:55.657 "data_size": 63488 00:13:55.657 } 00:13:55.657 ] 00:13:55.657 }' 00:13:55.657 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.657 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.915 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.915 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.915 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.915 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.915 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 
[2024-12-05 20:06:57.380970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.175 20:06:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.175 "name": "Existed_Raid", 00:13:56.175 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:56.175 "strip_size_kb": 0, 00:13:56.175 "state": "configuring", 00:13:56.175 "raid_level": "raid1", 00:13:56.175 "superblock": true, 00:13:56.175 "num_base_bdevs": 4, 00:13:56.175 "num_base_bdevs_discovered": 2, 00:13:56.175 "num_base_bdevs_operational": 4, 00:13:56.175 "base_bdevs_list": [ 00:13:56.175 { 00:13:56.175 "name": "BaseBdev1", 00:13:56.175 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": null, 00:13:56.175 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:56.175 "is_configured": false, 00:13:56.175 "data_offset": 0, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": null, 00:13:56.175 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:56.175 "is_configured": false, 00:13:56.175 "data_offset": 0, 00:13:56.175 "data_size": 63488 00:13:56.175 }, 00:13:56.175 { 00:13:56.175 "name": "BaseBdev4", 00:13:56.175 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:56.175 "is_configured": true, 00:13:56.175 "data_offset": 2048, 00:13:56.175 "data_size": 63488 00:13:56.175 } 00:13:56.175 ] 00:13:56.175 }' 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.175 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.434 
20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.434 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.434 [2024-12-05 20:06:57.868493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.693 "name": "Existed_Raid", 00:13:56.693 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:56.693 "strip_size_kb": 0, 00:13:56.693 "state": "configuring", 00:13:56.693 "raid_level": "raid1", 00:13:56.693 "superblock": true, 00:13:56.693 "num_base_bdevs": 4, 00:13:56.693 "num_base_bdevs_discovered": 3, 00:13:56.693 "num_base_bdevs_operational": 4, 00:13:56.693 "base_bdevs_list": [ 00:13:56.693 { 00:13:56.693 "name": "BaseBdev1", 00:13:56.693 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:56.693 "is_configured": true, 00:13:56.693 "data_offset": 2048, 00:13:56.693 "data_size": 63488 00:13:56.693 }, 00:13:56.693 { 00:13:56.693 "name": null, 00:13:56.693 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:56.693 "is_configured": false, 00:13:56.693 "data_offset": 0, 00:13:56.693 "data_size": 63488 00:13:56.693 }, 00:13:56.693 { 00:13:56.693 "name": "BaseBdev3", 00:13:56.693 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:56.693 "is_configured": true, 00:13:56.693 "data_offset": 2048, 00:13:56.693 "data_size": 63488 00:13:56.693 }, 00:13:56.693 { 00:13:56.693 "name": "BaseBdev4", 00:13:56.693 "uuid": 
"cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:56.693 "is_configured": true, 00:13:56.693 "data_offset": 2048, 00:13:56.693 "data_size": 63488 00:13:56.693 } 00:13:56.693 ] 00:13:56.693 }' 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.693 20:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.953 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 [2024-12-05 20:06:58.383937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.214 "name": "Existed_Raid", 00:13:57.214 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:57.214 "strip_size_kb": 0, 00:13:57.214 "state": "configuring", 00:13:57.214 "raid_level": "raid1", 00:13:57.214 "superblock": true, 00:13:57.214 "num_base_bdevs": 4, 00:13:57.214 "num_base_bdevs_discovered": 2, 00:13:57.214 "num_base_bdevs_operational": 4, 00:13:57.214 "base_bdevs_list": [ 00:13:57.214 { 00:13:57.214 "name": null, 00:13:57.214 
"uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:57.214 "is_configured": false, 00:13:57.214 "data_offset": 0, 00:13:57.214 "data_size": 63488 00:13:57.214 }, 00:13:57.214 { 00:13:57.214 "name": null, 00:13:57.214 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:57.214 "is_configured": false, 00:13:57.214 "data_offset": 0, 00:13:57.214 "data_size": 63488 00:13:57.214 }, 00:13:57.214 { 00:13:57.214 "name": "BaseBdev3", 00:13:57.214 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:57.214 "is_configured": true, 00:13:57.214 "data_offset": 2048, 00:13:57.214 "data_size": 63488 00:13:57.214 }, 00:13:57.214 { 00:13:57.214 "name": "BaseBdev4", 00:13:57.214 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:57.214 "is_configured": true, 00:13:57.214 "data_offset": 2048, 00:13:57.214 "data_size": 63488 00:13:57.214 } 00:13:57.214 ] 00:13:57.214 }' 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.214 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.783 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.783 20:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.783 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.783 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.783 20:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.783 [2024-12-05 20:06:59.024110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.783 20:06:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.783 "name": "Existed_Raid", 00:13:57.783 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:57.783 "strip_size_kb": 0, 00:13:57.783 "state": "configuring", 00:13:57.783 "raid_level": "raid1", 00:13:57.783 "superblock": true, 00:13:57.783 "num_base_bdevs": 4, 00:13:57.783 "num_base_bdevs_discovered": 3, 00:13:57.783 "num_base_bdevs_operational": 4, 00:13:57.783 "base_bdevs_list": [ 00:13:57.783 { 00:13:57.783 "name": null, 00:13:57.783 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:57.783 "is_configured": false, 00:13:57.783 "data_offset": 0, 00:13:57.783 "data_size": 63488 00:13:57.783 }, 00:13:57.783 { 00:13:57.783 "name": "BaseBdev2", 00:13:57.783 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:57.783 "is_configured": true, 00:13:57.783 "data_offset": 2048, 00:13:57.783 "data_size": 63488 00:13:57.783 }, 00:13:57.783 { 00:13:57.783 "name": "BaseBdev3", 00:13:57.783 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:57.783 "is_configured": true, 00:13:57.783 "data_offset": 2048, 00:13:57.783 "data_size": 63488 00:13:57.783 }, 00:13:57.783 { 00:13:57.783 "name": "BaseBdev4", 00:13:57.783 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:57.783 "is_configured": true, 00:13:57.783 "data_offset": 2048, 00:13:57.783 "data_size": 63488 00:13:57.783 } 00:13:57.783 ] 00:13:57.783 }' 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.783 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.350 20:06:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f8b8d070-a9d9-4371-af63-83de2c246f6b 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 [2024-12-05 20:06:59.606710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:58.350 [2024-12-05 20:06:59.607126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:58.350 [2024-12-05 20:06:59.607190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.350 [2024-12-05 20:06:59.607504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:58.350 [2024-12-05 20:06:59.607734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:58.350 [2024-12-05 20:06:59.607782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:58.350 NewBaseBdev 00:13:58.350 [2024-12-05 20:06:59.608020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 [ 00:13:58.350 { 00:13:58.350 "name": "NewBaseBdev", 00:13:58.350 "aliases": [ 00:13:58.350 "f8b8d070-a9d9-4371-af63-83de2c246f6b" 00:13:58.350 ], 00:13:58.350 "product_name": "Malloc disk", 00:13:58.350 "block_size": 512, 00:13:58.350 "num_blocks": 65536, 00:13:58.350 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:58.350 "assigned_rate_limits": { 00:13:58.350 "rw_ios_per_sec": 0, 00:13:58.350 "rw_mbytes_per_sec": 0, 00:13:58.350 "r_mbytes_per_sec": 0, 00:13:58.350 "w_mbytes_per_sec": 0 00:13:58.350 }, 00:13:58.350 "claimed": true, 00:13:58.350 "claim_type": "exclusive_write", 00:13:58.350 "zoned": false, 00:13:58.350 "supported_io_types": { 00:13:58.350 "read": true, 00:13:58.350 "write": true, 00:13:58.350 "unmap": true, 00:13:58.350 "flush": true, 00:13:58.350 "reset": true, 00:13:58.350 "nvme_admin": false, 00:13:58.350 "nvme_io": false, 00:13:58.350 "nvme_io_md": false, 00:13:58.350 "write_zeroes": true, 00:13:58.350 "zcopy": true, 00:13:58.350 "get_zone_info": false, 00:13:58.350 "zone_management": false, 00:13:58.350 "zone_append": false, 00:13:58.350 "compare": false, 00:13:58.350 "compare_and_write": false, 00:13:58.350 "abort": true, 00:13:58.350 "seek_hole": false, 00:13:58.350 "seek_data": false, 00:13:58.350 "copy": true, 00:13:58.350 "nvme_iov_md": false 00:13:58.350 }, 00:13:58.350 "memory_domains": [ 00:13:58.350 { 00:13:58.350 "dma_device_id": "system", 00:13:58.350 "dma_device_type": 1 00:13:58.350 }, 00:13:58.350 { 00:13:58.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.350 "dma_device_type": 2 00:13:58.350 } 00:13:58.350 ], 00:13:58.350 "driver_specific": {} 00:13:58.350 } 00:13:58.350 ] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:58.350 20:06:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.350 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.350 "name": "Existed_Raid", 00:13:58.351 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:58.351 "strip_size_kb": 0, 00:13:58.351 
"state": "online", 00:13:58.351 "raid_level": "raid1", 00:13:58.351 "superblock": true, 00:13:58.351 "num_base_bdevs": 4, 00:13:58.351 "num_base_bdevs_discovered": 4, 00:13:58.351 "num_base_bdevs_operational": 4, 00:13:58.351 "base_bdevs_list": [ 00:13:58.351 { 00:13:58.351 "name": "NewBaseBdev", 00:13:58.351 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:58.351 "is_configured": true, 00:13:58.351 "data_offset": 2048, 00:13:58.351 "data_size": 63488 00:13:58.351 }, 00:13:58.351 { 00:13:58.351 "name": "BaseBdev2", 00:13:58.351 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:58.351 "is_configured": true, 00:13:58.351 "data_offset": 2048, 00:13:58.351 "data_size": 63488 00:13:58.351 }, 00:13:58.351 { 00:13:58.351 "name": "BaseBdev3", 00:13:58.351 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:58.351 "is_configured": true, 00:13:58.351 "data_offset": 2048, 00:13:58.351 "data_size": 63488 00:13:58.351 }, 00:13:58.351 { 00:13:58.351 "name": "BaseBdev4", 00:13:58.351 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:58.351 "is_configured": true, 00:13:58.351 "data_offset": 2048, 00:13:58.351 "data_size": 63488 00:13:58.351 } 00:13:58.351 ] 00:13:58.351 }' 00:13:58.351 20:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.351 20:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.920 
20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.920 [2024-12-05 20:07:00.090271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.920 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.920 "name": "Existed_Raid", 00:13:58.920 "aliases": [ 00:13:58.920 "17ad08cc-8b11-4877-abb4-b2f40d48e31d" 00:13:58.920 ], 00:13:58.920 "product_name": "Raid Volume", 00:13:58.920 "block_size": 512, 00:13:58.920 "num_blocks": 63488, 00:13:58.920 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:58.920 "assigned_rate_limits": { 00:13:58.920 "rw_ios_per_sec": 0, 00:13:58.920 "rw_mbytes_per_sec": 0, 00:13:58.920 "r_mbytes_per_sec": 0, 00:13:58.920 "w_mbytes_per_sec": 0 00:13:58.920 }, 00:13:58.920 "claimed": false, 00:13:58.920 "zoned": false, 00:13:58.920 "supported_io_types": { 00:13:58.920 "read": true, 00:13:58.920 "write": true, 00:13:58.920 "unmap": false, 00:13:58.920 "flush": false, 00:13:58.920 "reset": true, 00:13:58.920 "nvme_admin": false, 00:13:58.920 "nvme_io": false, 00:13:58.920 "nvme_io_md": false, 00:13:58.920 "write_zeroes": true, 00:13:58.920 "zcopy": false, 00:13:58.920 "get_zone_info": false, 00:13:58.920 "zone_management": false, 00:13:58.920 "zone_append": false, 00:13:58.920 "compare": false, 00:13:58.920 "compare_and_write": false, 00:13:58.920 
"abort": false, 00:13:58.920 "seek_hole": false, 00:13:58.920 "seek_data": false, 00:13:58.920 "copy": false, 00:13:58.920 "nvme_iov_md": false 00:13:58.920 }, 00:13:58.920 "memory_domains": [ 00:13:58.920 { 00:13:58.920 "dma_device_id": "system", 00:13:58.920 "dma_device_type": 1 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.920 "dma_device_type": 2 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "system", 00:13:58.920 "dma_device_type": 1 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.920 "dma_device_type": 2 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "system", 00:13:58.920 "dma_device_type": 1 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.920 "dma_device_type": 2 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "system", 00:13:58.920 "dma_device_type": 1 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.920 "dma_device_type": 2 00:13:58.920 } 00:13:58.920 ], 00:13:58.920 "driver_specific": { 00:13:58.920 "raid": { 00:13:58.920 "uuid": "17ad08cc-8b11-4877-abb4-b2f40d48e31d", 00:13:58.920 "strip_size_kb": 0, 00:13:58.920 "state": "online", 00:13:58.920 "raid_level": "raid1", 00:13:58.920 "superblock": true, 00:13:58.920 "num_base_bdevs": 4, 00:13:58.920 "num_base_bdevs_discovered": 4, 00:13:58.920 "num_base_bdevs_operational": 4, 00:13:58.920 "base_bdevs_list": [ 00:13:58.920 { 00:13:58.920 "name": "NewBaseBdev", 00:13:58.920 "uuid": "f8b8d070-a9d9-4371-af63-83de2c246f6b", 00:13:58.920 "is_configured": true, 00:13:58.920 "data_offset": 2048, 00:13:58.920 "data_size": 63488 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "name": "BaseBdev2", 00:13:58.920 "uuid": "fc37a6a0-242d-4151-8f42-984eb357041e", 00:13:58.920 "is_configured": true, 00:13:58.920 "data_offset": 2048, 00:13:58.920 "data_size": 63488 00:13:58.920 }, 00:13:58.920 { 
00:13:58.920 "name": "BaseBdev3", 00:13:58.920 "uuid": "30e44f2d-0669-4dbe-8370-6aa0c1498073", 00:13:58.920 "is_configured": true, 00:13:58.920 "data_offset": 2048, 00:13:58.920 "data_size": 63488 00:13:58.920 }, 00:13:58.920 { 00:13:58.920 "name": "BaseBdev4", 00:13:58.920 "uuid": "cd65dd79-c570-4ce2-87d0-3bb0721ccf63", 00:13:58.920 "is_configured": true, 00:13:58.920 "data_offset": 2048, 00:13:58.920 "data_size": 63488 00:13:58.920 } 00:13:58.920 ] 00:13:58.920 } 00:13:58.920 } 00:13:58.920 }' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:58.921 BaseBdev2 00:13:58.921 BaseBdev3 00:13:58.921 BaseBdev4' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.921 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.180 [2024-12-05 20:07:00.401399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.180 [2024-12-05 20:07:00.401474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.180 [2024-12-05 20:07:00.401592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.180 [2024-12-05 20:07:00.401913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.180 [2024-12-05 20:07:00.401940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73985 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73985 ']' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73985 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73985 00:13:59.180 killing process with pid 73985 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73985' 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73985 00:13:59.180 [2024-12-05 20:07:00.448354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.180 20:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73985 00:13:59.440 [2024-12-05 20:07:00.844230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.820 20:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:00.820 00:14:00.820 real 0m11.479s 00:14:00.820 user 0m18.213s 00:14:00.820 sys 0m2.060s 00:14:00.820 ************************************ 00:14:00.820 END TEST raid_state_function_test_sb 
00:14:00.820 ************************************ 00:14:00.820 20:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.820 20:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 20:07:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:00.820 20:07:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:00.820 20:07:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.820 20:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 ************************************ 00:14:00.821 START TEST raid_superblock_test 00:14:00.821 ************************************ 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:00.821 20:07:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:00.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74653 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74653 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74653 ']' 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.821 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.821 [2024-12-05 20:07:02.144280] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:14:00.821 [2024-12-05 20:07:02.144504] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74653 ] 00:14:01.081 [2024-12-05 20:07:02.315423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.081 [2024-12-05 20:07:02.426524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.340 [2024-12-05 20:07:02.629263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.340 [2024-12-05 20:07:02.629362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.599 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:01.600 
20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.600 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.600 malloc1 00:14:01.600 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.600 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.600 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.600 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 [2024-12-05 20:07:03.039076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.860 [2024-12-05 20:07:03.039182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.860 [2024-12-05 20:07:03.039246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:01.860 [2024-12-05 20:07:03.039293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.860 [2024-12-05 20:07:03.041744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.860 [2024-12-05 20:07:03.041816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.860 pt1 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 malloc2 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 [2024-12-05 20:07:03.100797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.860 [2024-12-05 20:07:03.100913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.860 [2024-12-05 20:07:03.100965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:01.860 [2024-12-05 20:07:03.100976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.860 [2024-12-05 20:07:03.103299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.860 [2024-12-05 20:07:03.103336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.860 
pt2 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 malloc3 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 [2024-12-05 20:07:03.170105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:01.860 [2024-12-05 20:07:03.170216] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.860 [2024-12-05 20:07:03.170263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:01.860 [2024-12-05 20:07:03.170302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.860 [2024-12-05 20:07:03.172659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.860 [2024-12-05 20:07:03.172736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:01.860 pt3 00:14:01.860 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.861 malloc4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.861 [2024-12-05 20:07:03.229855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:01.861 [2024-12-05 20:07:03.230019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.861 [2024-12-05 20:07:03.230072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:01.861 [2024-12-05 20:07:03.230112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.861 [2024-12-05 20:07:03.232474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.861 [2024-12-05 20:07:03.232549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:01.861 pt4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.861 [2024-12-05 20:07:03.241865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:01.861 [2024-12-05 20:07:03.243835] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.861 [2024-12-05 20:07:03.243962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:01.861 [2024-12-05 20:07:03.244077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:01.861 [2024-12-05 20:07:03.244345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:01.861 [2024-12-05 20:07:03.244414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.861 [2024-12-05 20:07:03.244729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:01.861 [2024-12-05 20:07:03.244989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:01.861 [2024-12-05 20:07:03.245056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:01.861 [2024-12-05 20:07:03.245261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.861 
20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.861 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.120 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.120 "name": "raid_bdev1", 00:14:02.120 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:02.120 "strip_size_kb": 0, 00:14:02.120 "state": "online", 00:14:02.120 "raid_level": "raid1", 00:14:02.120 "superblock": true, 00:14:02.120 "num_base_bdevs": 4, 00:14:02.120 "num_base_bdevs_discovered": 4, 00:14:02.120 "num_base_bdevs_operational": 4, 00:14:02.120 "base_bdevs_list": [ 00:14:02.120 { 00:14:02.120 "name": "pt1", 00:14:02.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": "pt2", 00:14:02.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": "pt3", 00:14:02.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 
00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": "pt4", 00:14:02.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.121 "is_configured": true, 00:14:02.121 "data_offset": 2048, 00:14:02.121 "data_size": 63488 00:14:02.121 } 00:14:02.121 ] 00:14:02.121 }' 00:14:02.121 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.121 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.379 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:02.379 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:02.379 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.380 [2024-12-05 20:07:03.729427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.380 "name": "raid_bdev1", 00:14:02.380 "aliases": [ 00:14:02.380 "e5c306b0-438b-4c00-8d55-ebb5f54a9755" 00:14:02.380 ], 
00:14:02.380 "product_name": "Raid Volume", 00:14:02.380 "block_size": 512, 00:14:02.380 "num_blocks": 63488, 00:14:02.380 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:02.380 "assigned_rate_limits": { 00:14:02.380 "rw_ios_per_sec": 0, 00:14:02.380 "rw_mbytes_per_sec": 0, 00:14:02.380 "r_mbytes_per_sec": 0, 00:14:02.380 "w_mbytes_per_sec": 0 00:14:02.380 }, 00:14:02.380 "claimed": false, 00:14:02.380 "zoned": false, 00:14:02.380 "supported_io_types": { 00:14:02.380 "read": true, 00:14:02.380 "write": true, 00:14:02.380 "unmap": false, 00:14:02.380 "flush": false, 00:14:02.380 "reset": true, 00:14:02.380 "nvme_admin": false, 00:14:02.380 "nvme_io": false, 00:14:02.380 "nvme_io_md": false, 00:14:02.380 "write_zeroes": true, 00:14:02.380 "zcopy": false, 00:14:02.380 "get_zone_info": false, 00:14:02.380 "zone_management": false, 00:14:02.380 "zone_append": false, 00:14:02.380 "compare": false, 00:14:02.380 "compare_and_write": false, 00:14:02.380 "abort": false, 00:14:02.380 "seek_hole": false, 00:14:02.380 "seek_data": false, 00:14:02.380 "copy": false, 00:14:02.380 "nvme_iov_md": false 00:14:02.380 }, 00:14:02.380 "memory_domains": [ 00:14:02.380 { 00:14:02.380 "dma_device_id": "system", 00:14:02.380 "dma_device_type": 1 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.380 "dma_device_type": 2 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "system", 00:14:02.380 "dma_device_type": 1 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.380 "dma_device_type": 2 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "system", 00:14:02.380 "dma_device_type": 1 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.380 "dma_device_type": 2 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": "system", 00:14:02.380 "dma_device_type": 1 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:02.380 "dma_device_type": 2 00:14:02.380 } 00:14:02.380 ], 00:14:02.380 "driver_specific": { 00:14:02.380 "raid": { 00:14:02.380 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:02.380 "strip_size_kb": 0, 00:14:02.380 "state": "online", 00:14:02.380 "raid_level": "raid1", 00:14:02.380 "superblock": true, 00:14:02.380 "num_base_bdevs": 4, 00:14:02.380 "num_base_bdevs_discovered": 4, 00:14:02.380 "num_base_bdevs_operational": 4, 00:14:02.380 "base_bdevs_list": [ 00:14:02.380 { 00:14:02.380 "name": "pt1", 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": "pt2", 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": "pt3", 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": "pt4", 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 } 00:14:02.380 ] 00:14:02.380 } 00:14:02.380 } 00:14:02.380 }' 00:14:02.380 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:02.640 pt2 00:14:02.640 pt3 00:14:02.640 pt4' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.640 20:07:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.640 20:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:02.640 [2024-12-05 20:07:04.040849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e5c306b0-438b-4c00-8d55-ebb5f54a9755 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e5c306b0-438b-4c00-8d55-ebb5f54a9755 ']' 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.640 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 [2024-12-05 20:07:04.080462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.900 [2024-12-05 20:07:04.080537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.900 [2024-12-05 20:07:04.080663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.900 [2024-12-05 20:07:04.080812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.900 [2024-12-05 20:07:04.080880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.900 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 [2024-12-05 20:07:04.244189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:02.900 [2024-12-05 20:07:04.246357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:02.900 [2024-12-05 20:07:04.246413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:02.900 [2024-12-05 20:07:04.246454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:02.900 [2024-12-05 20:07:04.246509] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:02.900 [2024-12-05 20:07:04.246570] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:02.900 [2024-12-05 20:07:04.246593] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:02.900 [2024-12-05 20:07:04.246613] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:02.900 [2024-12-05 20:07:04.246627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.900 [2024-12-05 20:07:04.246640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:14:02.900 request: 00:14:02.900 { 00:14:02.900 "name": "raid_bdev1", 00:14:02.900 "raid_level": "raid1", 00:14:02.900 "base_bdevs": [ 00:14:02.900 "malloc1", 00:14:02.900 "malloc2", 00:14:02.900 "malloc3", 00:14:02.900 "malloc4" 00:14:02.900 ], 00:14:02.900 "superblock": false, 00:14:02.900 "method": "bdev_raid_create", 00:14:02.900 "req_id": 1 00:14:02.900 } 00:14:02.900 Got JSON-RPC error response 00:14:02.900 response: 00:14:02.900 { 00:14:02.900 "code": -17, 00:14:02.900 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:02.900 } 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:02.901 20:07:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.901 [2024-12-05 20:07:04.312063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:02.901 [2024-12-05 20:07:04.312195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.901 [2024-12-05 20:07:04.312231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:02.901 [2024-12-05 20:07:04.312261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.901 [2024-12-05 20:07:04.314676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.901 [2024-12-05 20:07:04.314781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.901 [2024-12-05 20:07:04.314922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:02.901 [2024-12-05 20:07:04.315063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.901 pt1 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.901 20:07:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.901 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.160 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.160 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.160 "name": "raid_bdev1", 00:14:03.160 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:03.160 "strip_size_kb": 0, 00:14:03.160 "state": "configuring", 00:14:03.160 "raid_level": "raid1", 00:14:03.160 "superblock": true, 00:14:03.160 "num_base_bdevs": 4, 00:14:03.160 "num_base_bdevs_discovered": 1, 00:14:03.160 "num_base_bdevs_operational": 4, 00:14:03.160 "base_bdevs_list": [ 00:14:03.160 { 00:14:03.160 "name": "pt1", 00:14:03.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.160 "is_configured": true, 00:14:03.160 "data_offset": 2048, 00:14:03.160 "data_size": 63488 00:14:03.160 }, 00:14:03.160 { 00:14:03.160 "name": null, 00:14:03.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.160 "is_configured": false, 00:14:03.160 "data_offset": 2048, 00:14:03.160 "data_size": 63488 00:14:03.160 }, 00:14:03.160 { 00:14:03.160 "name": null, 00:14:03.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.160 
"is_configured": false, 00:14:03.160 "data_offset": 2048, 00:14:03.160 "data_size": 63488 00:14:03.160 }, 00:14:03.160 { 00:14:03.160 "name": null, 00:14:03.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.160 "is_configured": false, 00:14:03.160 "data_offset": 2048, 00:14:03.160 "data_size": 63488 00:14:03.160 } 00:14:03.160 ] 00:14:03.160 }' 00:14:03.160 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.160 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.419 [2024-12-05 20:07:04.787258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.419 [2024-12-05 20:07:04.787380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.419 [2024-12-05 20:07:04.787409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:03.419 [2024-12-05 20:07:04.787421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.419 [2024-12-05 20:07:04.787874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.419 [2024-12-05 20:07:04.787915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.419 [2024-12-05 20:07:04.788027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:03.419 [2024-12-05 20:07:04.788056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:03.419 pt2 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.419 [2024-12-05 20:07:04.795236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.419 20:07:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.419 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.419 "name": "raid_bdev1", 00:14:03.419 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:03.420 "strip_size_kb": 0, 00:14:03.420 "state": "configuring", 00:14:03.420 "raid_level": "raid1", 00:14:03.420 "superblock": true, 00:14:03.420 "num_base_bdevs": 4, 00:14:03.420 "num_base_bdevs_discovered": 1, 00:14:03.420 "num_base_bdevs_operational": 4, 00:14:03.420 "base_bdevs_list": [ 00:14:03.420 { 00:14:03.420 "name": "pt1", 00:14:03.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.420 "is_configured": true, 00:14:03.420 "data_offset": 2048, 00:14:03.420 "data_size": 63488 00:14:03.420 }, 00:14:03.420 { 00:14:03.420 "name": null, 00:14:03.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.420 "is_configured": false, 00:14:03.420 "data_offset": 0, 00:14:03.420 "data_size": 63488 00:14:03.420 }, 00:14:03.420 { 00:14:03.420 "name": null, 00:14:03.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.420 "is_configured": false, 00:14:03.420 "data_offset": 2048, 00:14:03.420 "data_size": 63488 00:14:03.420 }, 00:14:03.420 { 00:14:03.420 "name": null, 00:14:03.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.420 "is_configured": false, 00:14:03.420 "data_offset": 2048, 00:14:03.420 "data_size": 63488 00:14:03.420 } 00:14:03.420 ] 00:14:03.420 }' 00:14:03.420 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.420 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.990 [2024-12-05 20:07:05.246475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.990 [2024-12-05 20:07:05.246625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.990 [2024-12-05 20:07:05.246652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:03.990 [2024-12-05 20:07:05.246661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.990 [2024-12-05 20:07:05.247140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.990 [2024-12-05 20:07:05.247160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.990 [2024-12-05 20:07:05.247265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:03.990 [2024-12-05 20:07:05.247289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.990 pt2 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.990 20:07:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.990 [2024-12-05 20:07:05.254449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.990 [2024-12-05 20:07:05.254507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.990 [2024-12-05 20:07:05.254529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:03.990 [2024-12-05 20:07:05.254539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.990 [2024-12-05 20:07:05.254975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.990 [2024-12-05 20:07:05.255000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.990 [2024-12-05 20:07:05.255117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:03.990 [2024-12-05 20:07:05.255138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.990 pt3 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.990 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.990 [2024-12-05 20:07:05.262394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:03.990 [2024-12-05 
20:07:05.262439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.990 [2024-12-05 20:07:05.262456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:03.990 [2024-12-05 20:07:05.262464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.990 [2024-12-05 20:07:05.262830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.990 [2024-12-05 20:07:05.262845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:03.990 [2024-12-05 20:07:05.262924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:03.990 [2024-12-05 20:07:05.262950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:03.990 [2024-12-05 20:07:05.263091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:03.990 [2024-12-05 20:07:05.263105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:03.990 [2024-12-05 20:07:05.263336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:03.990 [2024-12-05 20:07:05.263488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:03.990 [2024-12-05 20:07:05.263500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:03.991 [2024-12-05 20:07:05.263636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.991 pt4 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.991 "name": "raid_bdev1", 00:14:03.991 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:03.991 "strip_size_kb": 0, 00:14:03.991 "state": "online", 00:14:03.991 "raid_level": "raid1", 00:14:03.991 "superblock": true, 00:14:03.991 "num_base_bdevs": 4, 00:14:03.991 
"num_base_bdevs_discovered": 4, 00:14:03.991 "num_base_bdevs_operational": 4, 00:14:03.991 "base_bdevs_list": [ 00:14:03.991 { 00:14:03.991 "name": "pt1", 00:14:03.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.991 "is_configured": true, 00:14:03.991 "data_offset": 2048, 00:14:03.991 "data_size": 63488 00:14:03.991 }, 00:14:03.991 { 00:14:03.991 "name": "pt2", 00:14:03.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.991 "is_configured": true, 00:14:03.991 "data_offset": 2048, 00:14:03.991 "data_size": 63488 00:14:03.991 }, 00:14:03.991 { 00:14:03.991 "name": "pt3", 00:14:03.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.991 "is_configured": true, 00:14:03.991 "data_offset": 2048, 00:14:03.991 "data_size": 63488 00:14:03.991 }, 00:14:03.991 { 00:14:03.991 "name": "pt4", 00:14:03.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.991 "is_configured": true, 00:14:03.991 "data_offset": 2048, 00:14:03.991 "data_size": 63488 00:14:03.991 } 00:14:03.991 ] 00:14:03.991 }' 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.991 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:04.629 [2024-12-05 20:07:05.781933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:04.629 "name": "raid_bdev1", 00:14:04.629 "aliases": [ 00:14:04.629 "e5c306b0-438b-4c00-8d55-ebb5f54a9755" 00:14:04.629 ], 00:14:04.629 "product_name": "Raid Volume", 00:14:04.629 "block_size": 512, 00:14:04.629 "num_blocks": 63488, 00:14:04.629 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:04.629 "assigned_rate_limits": { 00:14:04.629 "rw_ios_per_sec": 0, 00:14:04.629 "rw_mbytes_per_sec": 0, 00:14:04.629 "r_mbytes_per_sec": 0, 00:14:04.629 "w_mbytes_per_sec": 0 00:14:04.629 }, 00:14:04.629 "claimed": false, 00:14:04.629 "zoned": false, 00:14:04.629 "supported_io_types": { 00:14:04.629 "read": true, 00:14:04.629 "write": true, 00:14:04.629 "unmap": false, 00:14:04.629 "flush": false, 00:14:04.629 "reset": true, 00:14:04.629 "nvme_admin": false, 00:14:04.629 "nvme_io": false, 00:14:04.629 "nvme_io_md": false, 00:14:04.629 "write_zeroes": true, 00:14:04.629 "zcopy": false, 00:14:04.629 "get_zone_info": false, 00:14:04.629 "zone_management": false, 00:14:04.629 "zone_append": false, 00:14:04.629 "compare": false, 00:14:04.629 "compare_and_write": false, 00:14:04.629 "abort": false, 00:14:04.629 "seek_hole": false, 00:14:04.629 "seek_data": false, 00:14:04.629 "copy": false, 00:14:04.629 "nvme_iov_md": false 00:14:04.629 }, 00:14:04.629 "memory_domains": [ 00:14:04.629 { 00:14:04.629 "dma_device_id": "system", 00:14:04.629 
"dma_device_type": 1 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.629 "dma_device_type": 2 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "system", 00:14:04.629 "dma_device_type": 1 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.629 "dma_device_type": 2 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "system", 00:14:04.629 "dma_device_type": 1 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.629 "dma_device_type": 2 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "system", 00:14:04.629 "dma_device_type": 1 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.629 "dma_device_type": 2 00:14:04.630 } 00:14:04.630 ], 00:14:04.630 "driver_specific": { 00:14:04.630 "raid": { 00:14:04.630 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:04.630 "strip_size_kb": 0, 00:14:04.630 "state": "online", 00:14:04.630 "raid_level": "raid1", 00:14:04.630 "superblock": true, 00:14:04.630 "num_base_bdevs": 4, 00:14:04.630 "num_base_bdevs_discovered": 4, 00:14:04.630 "num_base_bdevs_operational": 4, 00:14:04.630 "base_bdevs_list": [ 00:14:04.630 { 00:14:04.630 "name": "pt1", 00:14:04.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.630 "is_configured": true, 00:14:04.630 "data_offset": 2048, 00:14:04.630 "data_size": 63488 00:14:04.630 }, 00:14:04.630 { 00:14:04.630 "name": "pt2", 00:14:04.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.630 "is_configured": true, 00:14:04.630 "data_offset": 2048, 00:14:04.630 "data_size": 63488 00:14:04.630 }, 00:14:04.630 { 00:14:04.630 "name": "pt3", 00:14:04.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.630 "is_configured": true, 00:14:04.630 "data_offset": 2048, 00:14:04.630 "data_size": 63488 00:14:04.630 }, 00:14:04.630 { 00:14:04.630 "name": "pt4", 00:14:04.630 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:04.630 "is_configured": true, 00:14:04.630 "data_offset": 2048, 00:14:04.630 "data_size": 63488 00:14:04.630 } 00:14:04.630 ] 00:14:04.630 } 00:14:04.630 } 00:14:04.630 }' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:04.630 pt2 00:14:04.630 pt3 00:14:04.630 pt4' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.630 20:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.630 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.890 [2024-12-05 20:07:06.129340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e5c306b0-438b-4c00-8d55-ebb5f54a9755 '!=' e5c306b0-438b-4c00-8d55-ebb5f54a9755 ']' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.890 [2024-12-05 20:07:06.161014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:04.890 20:07:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.890 "name": "raid_bdev1", 00:14:04.890 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:04.890 "strip_size_kb": 0, 00:14:04.890 "state": "online", 
00:14:04.890 "raid_level": "raid1", 00:14:04.890 "superblock": true, 00:14:04.890 "num_base_bdevs": 4, 00:14:04.890 "num_base_bdevs_discovered": 3, 00:14:04.890 "num_base_bdevs_operational": 3, 00:14:04.890 "base_bdevs_list": [ 00:14:04.890 { 00:14:04.890 "name": null, 00:14:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.890 "is_configured": false, 00:14:04.890 "data_offset": 0, 00:14:04.890 "data_size": 63488 00:14:04.890 }, 00:14:04.890 { 00:14:04.890 "name": "pt2", 00:14:04.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.890 "is_configured": true, 00:14:04.890 "data_offset": 2048, 00:14:04.890 "data_size": 63488 00:14:04.890 }, 00:14:04.890 { 00:14:04.890 "name": "pt3", 00:14:04.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.890 "is_configured": true, 00:14:04.890 "data_offset": 2048, 00:14:04.890 "data_size": 63488 00:14:04.890 }, 00:14:04.890 { 00:14:04.890 "name": "pt4", 00:14:04.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.890 "is_configured": true, 00:14:04.890 "data_offset": 2048, 00:14:04.890 "data_size": 63488 00:14:04.890 } 00:14:04.890 ] 00:14:04.890 }' 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.890 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 [2024-12-05 20:07:06.596208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.457 [2024-12-05 20:07:06.596245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.457 [2024-12-05 20:07:06.596340] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:05.457 [2024-12-05 20:07:06.596433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.457 [2024-12-05 20:07:06.596445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:05.457 
20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.457 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.457 [2024-12-05 20:07:06.696047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.457 [2024-12-05 20:07:06.696112] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.457 [2024-12-05 20:07:06.696136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:05.457 [2024-12-05 20:07:06.696146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.458 [2024-12-05 20:07:06.698679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.458 [2024-12-05 20:07:06.698720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.458 [2024-12-05 20:07:06.698817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:05.458 [2024-12-05 20:07:06.698865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.458 pt2 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.458 "name": "raid_bdev1", 00:14:05.458 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:05.458 "strip_size_kb": 0, 00:14:05.458 "state": "configuring", 00:14:05.458 "raid_level": "raid1", 00:14:05.458 "superblock": true, 00:14:05.458 "num_base_bdevs": 4, 00:14:05.458 "num_base_bdevs_discovered": 1, 00:14:05.458 "num_base_bdevs_operational": 3, 00:14:05.458 "base_bdevs_list": [ 00:14:05.458 { 00:14:05.458 "name": null, 00:14:05.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.458 "is_configured": false, 00:14:05.458 "data_offset": 2048, 00:14:05.458 "data_size": 63488 00:14:05.458 }, 00:14:05.458 { 00:14:05.458 "name": "pt2", 00:14:05.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.458 "is_configured": true, 00:14:05.458 "data_offset": 2048, 00:14:05.458 "data_size": 63488 00:14:05.458 }, 00:14:05.458 { 00:14:05.458 "name": null, 00:14:05.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.458 "is_configured": false, 00:14:05.458 "data_offset": 2048, 00:14:05.458 "data_size": 63488 00:14:05.458 }, 00:14:05.458 { 00:14:05.458 "name": null, 00:14:05.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.458 "is_configured": false, 00:14:05.458 "data_offset": 2048, 00:14:05.458 "data_size": 63488 00:14:05.458 } 00:14:05.458 ] 00:14:05.458 }' 
00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.458 20:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:05.717 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:05.717 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:05.717 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.976 [2024-12-05 20:07:07.155282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:05.976 [2024-12-05 20:07:07.155412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.976 [2024-12-05 20:07:07.155466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:05.976 [2024-12-05 20:07:07.155499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.976 [2024-12-05 20:07:07.156070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.976 [2024-12-05 20:07:07.156142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:05.976 [2024-12-05 20:07:07.156270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:05.976 [2024-12-05 20:07:07.156326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.976 pt3 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.976 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.976 "name": "raid_bdev1", 00:14:05.976 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:05.976 "strip_size_kb": 0, 00:14:05.976 "state": "configuring", 00:14:05.976 "raid_level": "raid1", 00:14:05.976 "superblock": true, 00:14:05.976 "num_base_bdevs": 4, 00:14:05.976 "num_base_bdevs_discovered": 2, 00:14:05.976 "num_base_bdevs_operational": 3, 00:14:05.976 
"base_bdevs_list": [ 00:14:05.976 { 00:14:05.976 "name": null, 00:14:05.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.976 "is_configured": false, 00:14:05.976 "data_offset": 2048, 00:14:05.976 "data_size": 63488 00:14:05.976 }, 00:14:05.976 { 00:14:05.976 "name": "pt2", 00:14:05.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.976 "is_configured": true, 00:14:05.976 "data_offset": 2048, 00:14:05.976 "data_size": 63488 00:14:05.976 }, 00:14:05.976 { 00:14:05.976 "name": "pt3", 00:14:05.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.976 "is_configured": true, 00:14:05.976 "data_offset": 2048, 00:14:05.976 "data_size": 63488 00:14:05.976 }, 00:14:05.976 { 00:14:05.976 "name": null, 00:14:05.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.977 "is_configured": false, 00:14:05.977 "data_offset": 2048, 00:14:05.977 "data_size": 63488 00:14:05.977 } 00:14:05.977 ] 00:14:05.977 }' 00:14:05.977 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.977 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.235 [2024-12-05 20:07:07.630505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:06.235 [2024-12-05 20:07:07.630586] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.235 [2024-12-05 20:07:07.630618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:06.235 [2024-12-05 20:07:07.630628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.235 [2024-12-05 20:07:07.631153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.235 [2024-12-05 20:07:07.631181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:06.235 [2024-12-05 20:07:07.631284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:06.235 [2024-12-05 20:07:07.631308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:06.235 [2024-12-05 20:07:07.631466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:06.235 [2024-12-05 20:07:07.631476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.235 [2024-12-05 20:07:07.631743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:06.235 [2024-12-05 20:07:07.631917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:06.235 [2024-12-05 20:07:07.631932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:06.235 [2024-12-05 20:07:07.632093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.235 pt4 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.235 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.493 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.493 "name": "raid_bdev1", 00:14:06.493 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:06.493 "strip_size_kb": 0, 00:14:06.493 "state": "online", 00:14:06.493 "raid_level": "raid1", 00:14:06.493 "superblock": true, 00:14:06.493 "num_base_bdevs": 4, 00:14:06.493 "num_base_bdevs_discovered": 3, 00:14:06.493 "num_base_bdevs_operational": 3, 00:14:06.493 "base_bdevs_list": [ 00:14:06.493 { 00:14:06.493 "name": null, 00:14:06.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.493 "is_configured": false, 00:14:06.493 
"data_offset": 2048, 00:14:06.493 "data_size": 63488 00:14:06.493 }, 00:14:06.493 { 00:14:06.493 "name": "pt2", 00:14:06.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.493 "is_configured": true, 00:14:06.493 "data_offset": 2048, 00:14:06.493 "data_size": 63488 00:14:06.493 }, 00:14:06.493 { 00:14:06.493 "name": "pt3", 00:14:06.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.493 "is_configured": true, 00:14:06.493 "data_offset": 2048, 00:14:06.493 "data_size": 63488 00:14:06.493 }, 00:14:06.493 { 00:14:06.493 "name": "pt4", 00:14:06.493 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.493 "is_configured": true, 00:14:06.493 "data_offset": 2048, 00:14:06.493 "data_size": 63488 00:14:06.493 } 00:14:06.493 ] 00:14:06.493 }' 00:14:06.493 20:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.493 20:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 [2024-12-05 20:07:08.097649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.752 [2024-12-05 20:07:08.097731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.752 [2024-12-05 20:07:08.097843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.752 [2024-12-05 20:07:08.097954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.752 [2024-12-05 20:07:08.098023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:06.752 20:07:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 [2024-12-05 20:07:08.173516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:06.752 [2024-12-05 20:07:08.173583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:06.752 [2024-12-05 20:07:08.173603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:06.752 [2024-12-05 20:07:08.173615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.752 [2024-12-05 20:07:08.175947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.752 [2024-12-05 20:07:08.176027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:06.752 [2024-12-05 20:07:08.176122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:06.752 [2024-12-05 20:07:08.176174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:06.752 [2024-12-05 20:07:08.176316] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:06.752 [2024-12-05 20:07:08.176330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.752 [2024-12-05 20:07:08.176347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:06.752 [2024-12-05 20:07:08.176431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.752 [2024-12-05 20:07:08.176545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.752 pt1 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.011 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.011 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.011 "name": "raid_bdev1", 00:14:07.011 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:07.011 "strip_size_kb": 0, 00:14:07.011 "state": "configuring", 00:14:07.011 "raid_level": "raid1", 00:14:07.011 "superblock": true, 00:14:07.011 "num_base_bdevs": 4, 00:14:07.011 "num_base_bdevs_discovered": 2, 00:14:07.011 "num_base_bdevs_operational": 3, 00:14:07.011 "base_bdevs_list": [ 00:14:07.011 { 00:14:07.011 "name": null, 00:14:07.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.011 "is_configured": false, 00:14:07.011 "data_offset": 2048, 00:14:07.011 
"data_size": 63488 00:14:07.011 }, 00:14:07.011 { 00:14:07.011 "name": "pt2", 00:14:07.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.011 "is_configured": true, 00:14:07.011 "data_offset": 2048, 00:14:07.011 "data_size": 63488 00:14:07.011 }, 00:14:07.011 { 00:14:07.011 "name": "pt3", 00:14:07.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.011 "is_configured": true, 00:14:07.011 "data_offset": 2048, 00:14:07.011 "data_size": 63488 00:14:07.011 }, 00:14:07.011 { 00:14:07.011 "name": null, 00:14:07.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.011 "is_configured": false, 00:14:07.011 "data_offset": 2048, 00:14:07.011 "data_size": 63488 00:14:07.011 } 00:14:07.011 ] 00:14:07.011 }' 00:14:07.011 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.011 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.270 [2024-12-05 
20:07:08.656715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:07.270 [2024-12-05 20:07:08.656845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.270 [2024-12-05 20:07:08.656902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:07.270 [2024-12-05 20:07:08.656948] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.270 [2024-12-05 20:07:08.657460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.270 [2024-12-05 20:07:08.657523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:07.270 [2024-12-05 20:07:08.657648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:07.270 [2024-12-05 20:07:08.657701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:07.270 [2024-12-05 20:07:08.657898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:07.270 [2024-12-05 20:07:08.657940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:07.270 [2024-12-05 20:07:08.658236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:07.270 [2024-12-05 20:07:08.658430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:07.270 [2024-12-05 20:07:08.658474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:07.270 [2024-12-05 20:07:08.658670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.270 pt4 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.270 20:07:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.270 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.529 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.529 "name": "raid_bdev1", 00:14:07.529 "uuid": "e5c306b0-438b-4c00-8d55-ebb5f54a9755", 00:14:07.529 "strip_size_kb": 0, 00:14:07.529 "state": "online", 00:14:07.529 "raid_level": "raid1", 00:14:07.529 "superblock": true, 00:14:07.529 "num_base_bdevs": 4, 00:14:07.529 "num_base_bdevs_discovered": 3, 00:14:07.529 "num_base_bdevs_operational": 3, 00:14:07.529 "base_bdevs_list": [ 00:14:07.529 { 
00:14:07.529 "name": null, 00:14:07.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.529 "is_configured": false, 00:14:07.529 "data_offset": 2048, 00:14:07.529 "data_size": 63488 00:14:07.529 }, 00:14:07.529 { 00:14:07.529 "name": "pt2", 00:14:07.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.529 "is_configured": true, 00:14:07.529 "data_offset": 2048, 00:14:07.529 "data_size": 63488 00:14:07.529 }, 00:14:07.529 { 00:14:07.529 "name": "pt3", 00:14:07.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.529 "is_configured": true, 00:14:07.529 "data_offset": 2048, 00:14:07.529 "data_size": 63488 00:14:07.529 }, 00:14:07.529 { 00:14:07.529 "name": "pt4", 00:14:07.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.529 "is_configured": true, 00:14:07.529 "data_offset": 2048, 00:14:07.529 "data_size": 63488 00:14:07.529 } 00:14:07.529 ] 00:14:07.529 }' 00:14:07.529 20:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.529 20:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:07.787 
20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.787 [2024-12-05 20:07:09.188148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.787 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e5c306b0-438b-4c00-8d55-ebb5f54a9755 '!=' e5c306b0-438b-4c00-8d55-ebb5f54a9755 ']' 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74653 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74653 ']' 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74653 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74653 00:14:08.046 killing process with pid 74653 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74653' 00:14:08.046 20:07:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74653 00:14:08.046 [2024-12-05 20:07:09.254155] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.046 [2024-12-05 20:07:09.254261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.046 20:07:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74653 00:14:08.046 [2024-12-05 20:07:09.254345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.046 [2024-12-05 20:07:09.254359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:08.305 [2024-12-05 20:07:09.663294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.686 20:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:09.686 ************************************ 00:14:09.686 END TEST raid_superblock_test 00:14:09.686 ************************************ 00:14:09.686 00:14:09.686 real 0m8.741s 00:14:09.686 user 0m13.768s 00:14:09.686 sys 0m1.604s 00:14:09.686 20:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.686 20:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.686 20:07:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:09.686 20:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:09.686 20:07:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.686 20:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.686 ************************************ 00:14:09.686 START TEST raid_read_error_test 00:14:09.686 ************************************ 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:09.686 
20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:09.686 20:07:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LYHDNTnxst 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75144 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75144 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75144 ']' 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.686 20:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.686 [2024-12-05 20:07:10.966270] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:14:09.686 [2024-12-05 20:07:10.966492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75144 ] 00:14:09.946 [2024-12-05 20:07:11.140059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.946 [2024-12-05 20:07:11.253950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.206 [2024-12-05 20:07:11.456234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.206 [2024-12-05 20:07:11.456272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.466 BaseBdev1_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.466 true 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.466 [2024-12-05 20:07:11.849691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:10.466 [2024-12-05 20:07:11.849807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.466 [2024-12-05 20:07:11.849834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:10.466 [2024-12-05 20:07:11.849845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.466 [2024-12-05 20:07:11.851935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.466 [2024-12-05 20:07:11.851975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.466 BaseBdev1 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.466 BaseBdev2_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.466 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.727 true 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.727 [2024-12-05 20:07:11.918177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:10.727 [2024-12-05 20:07:11.918272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.727 [2024-12-05 20:07:11.918293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:10.727 [2024-12-05 20:07:11.918303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.727 [2024-12-05 20:07:11.920420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.727 [2024-12-05 20:07:11.920463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.727 BaseBdev2 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.727 BaseBdev3_malloc 00:14:10.727 20:07:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.727 true 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.727 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.727 [2024-12-05 20:07:11.993328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:10.727 [2024-12-05 20:07:11.993431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.727 [2024-12-05 20:07:11.993453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:10.728 [2024-12-05 20:07:11.993464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.728 [2024-12-05 20:07:11.995714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.728 [2024-12-05 20:07:11.995752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.728 BaseBdev3 00:14:10.728 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.728 20:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:10.728 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.728 20:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 BaseBdev4_malloc 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 true 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 [2024-12-05 20:07:12.059934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:10.728 [2024-12-05 20:07:12.059981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.728 [2024-12-05 20:07:12.059998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:10.728 [2024-12-05 20:07:12.060007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.728 [2024-12-05 20:07:12.062170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.728 [2024-12-05 20:07:12.062223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.728 BaseBdev4 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 [2024-12-05 20:07:12.071963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.728 [2024-12-05 20:07:12.073725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.728 [2024-12-05 20:07:12.073800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.728 [2024-12-05 20:07:12.073861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.728 [2024-12-05 20:07:12.074101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:10.728 [2024-12-05 20:07:12.074116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.728 [2024-12-05 20:07:12.074346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:10.728 [2024-12-05 20:07:12.074524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:10.728 [2024-12-05 20:07:12.074532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:10.728 [2024-12-05 20:07:12.074675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:10.728 20:07:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.728 "name": "raid_bdev1", 00:14:10.728 "uuid": "556b08f7-2b65-493f-bc7a-c4f8e0288355", 00:14:10.728 "strip_size_kb": 0, 00:14:10.728 "state": "online", 00:14:10.728 "raid_level": "raid1", 00:14:10.728 "superblock": true, 00:14:10.728 "num_base_bdevs": 4, 00:14:10.728 "num_base_bdevs_discovered": 4, 00:14:10.728 "num_base_bdevs_operational": 4, 00:14:10.728 "base_bdevs_list": [ 00:14:10.728 { 
00:14:10.728 "name": "BaseBdev1", 00:14:10.728 "uuid": "961d91f1-89f7-550b-90b4-394f1237fa6d", 00:14:10.728 "is_configured": true, 00:14:10.728 "data_offset": 2048, 00:14:10.728 "data_size": 63488 00:14:10.728 }, 00:14:10.728 { 00:14:10.728 "name": "BaseBdev2", 00:14:10.728 "uuid": "d92456af-8811-5315-8b97-e1b596bf78f8", 00:14:10.728 "is_configured": true, 00:14:10.728 "data_offset": 2048, 00:14:10.728 "data_size": 63488 00:14:10.728 }, 00:14:10.728 { 00:14:10.728 "name": "BaseBdev3", 00:14:10.728 "uuid": "e4f8e2aa-b23d-5ff4-8ef3-44e7ab7da89e", 00:14:10.728 "is_configured": true, 00:14:10.728 "data_offset": 2048, 00:14:10.728 "data_size": 63488 00:14:10.728 }, 00:14:10.728 { 00:14:10.728 "name": "BaseBdev4", 00:14:10.728 "uuid": "fc99d88c-0595-556c-b1fa-a7890b9c8997", 00:14:10.728 "is_configured": true, 00:14:10.728 "data_offset": 2048, 00:14:10.728 "data_size": 63488 00:14:10.728 } 00:14:10.728 ] 00:14:10.728 }' 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.728 20:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.298 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:11.298 20:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:11.298 [2024-12-05 20:07:12.624350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.237 20:07:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.237 20:07:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.237 "name": "raid_bdev1", 00:14:12.237 "uuid": "556b08f7-2b65-493f-bc7a-c4f8e0288355", 00:14:12.237 "strip_size_kb": 0, 00:14:12.237 "state": "online", 00:14:12.237 "raid_level": "raid1", 00:14:12.237 "superblock": true, 00:14:12.237 "num_base_bdevs": 4, 00:14:12.237 "num_base_bdevs_discovered": 4, 00:14:12.237 "num_base_bdevs_operational": 4, 00:14:12.237 "base_bdevs_list": [ 00:14:12.237 { 00:14:12.237 "name": "BaseBdev1", 00:14:12.237 "uuid": "961d91f1-89f7-550b-90b4-394f1237fa6d", 00:14:12.237 "is_configured": true, 00:14:12.237 "data_offset": 2048, 00:14:12.237 "data_size": 63488 00:14:12.237 }, 00:14:12.237 { 00:14:12.237 "name": "BaseBdev2", 00:14:12.237 "uuid": "d92456af-8811-5315-8b97-e1b596bf78f8", 00:14:12.237 "is_configured": true, 00:14:12.237 "data_offset": 2048, 00:14:12.237 "data_size": 63488 00:14:12.237 }, 00:14:12.237 { 00:14:12.237 "name": "BaseBdev3", 00:14:12.237 "uuid": "e4f8e2aa-b23d-5ff4-8ef3-44e7ab7da89e", 00:14:12.237 "is_configured": true, 00:14:12.237 "data_offset": 2048, 00:14:12.237 "data_size": 63488 00:14:12.237 }, 00:14:12.237 { 00:14:12.237 "name": "BaseBdev4", 00:14:12.237 "uuid": "fc99d88c-0595-556c-b1fa-a7890b9c8997", 00:14:12.237 "is_configured": true, 00:14:12.237 "data_offset": 2048, 00:14:12.237 "data_size": 63488 00:14:12.237 } 00:14:12.237 ] 00:14:12.237 }' 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.237 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.536 [2024-12-05 20:07:13.956983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.536 [2024-12-05 20:07:13.957076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.536 [2024-12-05 20:07:13.960018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.536 [2024-12-05 20:07:13.960116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.536 [2024-12-05 20:07:13.960279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.536 [2024-12-05 20:07:13.960333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:12.536 { 00:14:12.536 "results": [ 00:14:12.536 { 00:14:12.536 "job": "raid_bdev1", 00:14:12.536 "core_mask": "0x1", 00:14:12.536 "workload": "randrw", 00:14:12.536 "percentage": 50, 00:14:12.536 "status": "finished", 00:14:12.536 "queue_depth": 1, 00:14:12.536 "io_size": 131072, 00:14:12.536 "runtime": 1.333636, 00:14:12.536 "iops": 10106.205891262684, 00:14:12.536 "mibps": 1263.2757364078354, 00:14:12.536 "io_failed": 0, 00:14:12.536 "io_timeout": 0, 00:14:12.536 "avg_latency_us": 96.13805839825665, 00:14:12.536 "min_latency_us": 25.2646288209607, 00:14:12.536 "max_latency_us": 1488.1537117903931 00:14:12.536 } 00:14:12.536 ], 00:14:12.536 "core_count": 1 00:14:12.536 } 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75144 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75144 ']' 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75144 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:12.536 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.795 20:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75144 00:14:12.795 20:07:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.795 20:07:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.795 20:07:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75144' 00:14:12.795 killing process with pid 75144 00:14:12.795 20:07:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75144 00:14:12.795 [2024-12-05 20:07:14.005345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.795 20:07:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75144 00:14:13.074 [2024-12-05 20:07:14.338052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LYHDNTnxst 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:14.452 ************************************ 00:14:14.452 END TEST raid_read_error_test 00:14:14.452 ************************************ 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:14.452 00:14:14.452 real 0m4.728s 00:14:14.452 user 0m5.558s 00:14:14.452 sys 0m0.562s 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.452 20:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.452 20:07:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:14.452 20:07:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:14.452 20:07:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.452 20:07:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.452 ************************************ 00:14:14.452 START TEST raid_write_error_test 00:14:14.452 ************************************ 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DMShdDPOkk 00:14:14.452 20:07:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75290 00:14:14.452 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75290 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75290 ']' 00:14:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.453 20:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.453 [2024-12-05 20:07:15.767279] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:14:14.453 [2024-12-05 20:07:15.767415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75290 ] 00:14:14.712 [2024-12-05 20:07:15.941380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.712 [2024-12-05 20:07:16.060726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.971 [2024-12-05 20:07:16.259316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.971 [2024-12-05 20:07:16.259384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.231 BaseBdev1_malloc 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.231 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 true 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 [2024-12-05 20:07:16.673263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:15.492 [2024-12-05 20:07:16.673338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.492 [2024-12-05 20:07:16.673364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:15.492 [2024-12-05 20:07:16.673376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.492 [2024-12-05 20:07:16.675890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.492 [2024-12-05 20:07:16.676020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.492 BaseBdev1 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 BaseBdev2_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:15.492 20:07:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 true 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 [2024-12-05 20:07:16.740538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:15.492 [2024-12-05 20:07:16.740599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.492 [2024-12-05 20:07:16.740619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:15.492 [2024-12-05 20:07:16.740631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.492 [2024-12-05 20:07:16.742813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.492 [2024-12-05 20:07:16.742850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.492 BaseBdev2 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:15.492 BaseBdev3_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 true 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 [2024-12-05 20:07:16.818140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:15.492 [2024-12-05 20:07:16.818194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.492 [2024-12-05 20:07:16.818230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:15.492 [2024-12-05 20:07:16.818241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.492 [2024-12-05 20:07:16.820534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.492 [2024-12-05 20:07:16.820653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:15.492 BaseBdev3 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 BaseBdev4_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 true 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 [2024-12-05 20:07:16.882019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:15.492 [2024-12-05 20:07:16.882118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.492 [2024-12-05 20:07:16.882141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:15.492 [2024-12-05 20:07:16.882152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.492 [2024-12-05 20:07:16.884558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.492 [2024-12-05 20:07:16.884602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:15.492 BaseBdev4 
00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.493 [2024-12-05 20:07:16.894068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.493 [2024-12-05 20:07:16.895996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.493 [2024-12-05 20:07:16.896115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.493 [2024-12-05 20:07:16.896201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.493 [2024-12-05 20:07:16.896496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:15.493 [2024-12-05 20:07:16.896554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.493 [2024-12-05 20:07:16.896860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:15.493 [2024-12-05 20:07:16.897123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:15.493 [2024-12-05 20:07:16.897166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:15.493 [2024-12-05 20:07:16.897375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.493 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.753 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.753 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.753 "name": "raid_bdev1", 00:14:15.753 "uuid": "4dae6345-dad1-4cec-891b-e67b516703bf", 00:14:15.753 "strip_size_kb": 0, 00:14:15.753 "state": "online", 00:14:15.753 "raid_level": "raid1", 00:14:15.753 "superblock": true, 00:14:15.753 "num_base_bdevs": 4, 00:14:15.753 "num_base_bdevs_discovered": 4, 00:14:15.753 
"num_base_bdevs_operational": 4, 00:14:15.753 "base_bdevs_list": [ 00:14:15.753 { 00:14:15.753 "name": "BaseBdev1", 00:14:15.753 "uuid": "8ccaca3c-6bf8-57e2-adcd-e692a733c613", 00:14:15.753 "is_configured": true, 00:14:15.753 "data_offset": 2048, 00:14:15.753 "data_size": 63488 00:14:15.753 }, 00:14:15.753 { 00:14:15.753 "name": "BaseBdev2", 00:14:15.753 "uuid": "8782aff2-53c7-5ecd-aa4c-4f278b441d94", 00:14:15.753 "is_configured": true, 00:14:15.753 "data_offset": 2048, 00:14:15.753 "data_size": 63488 00:14:15.753 }, 00:14:15.753 { 00:14:15.753 "name": "BaseBdev3", 00:14:15.753 "uuid": "c06294cc-4a6b-5a17-a05b-00773aaffd2f", 00:14:15.753 "is_configured": true, 00:14:15.753 "data_offset": 2048, 00:14:15.753 "data_size": 63488 00:14:15.753 }, 00:14:15.753 { 00:14:15.753 "name": "BaseBdev4", 00:14:15.753 "uuid": "fc76043f-bb56-54be-9d3d-15c25ce9cb8d", 00:14:15.753 "is_configured": true, 00:14:15.753 "data_offset": 2048, 00:14:15.753 "data_size": 63488 00:14:15.753 } 00:14:15.753 ] 00:14:15.753 }' 00:14:15.753 20:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.753 20:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.014 20:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:16.014 20:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:16.014 [2024-12-05 20:07:17.426599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.954 [2024-12-05 20:07:18.342686] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:16.954 [2024-12-05 20:07:18.342751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.954 [2024-12-05 20:07:18.343002] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.954 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.213 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.213 "name": "raid_bdev1", 00:14:17.213 "uuid": "4dae6345-dad1-4cec-891b-e67b516703bf", 00:14:17.213 "strip_size_kb": 0, 00:14:17.213 "state": "online", 00:14:17.213 "raid_level": "raid1", 00:14:17.213 "superblock": true, 00:14:17.213 "num_base_bdevs": 4, 00:14:17.213 "num_base_bdevs_discovered": 3, 00:14:17.213 "num_base_bdevs_operational": 3, 00:14:17.213 "base_bdevs_list": [ 00:14:17.213 { 00:14:17.213 "name": null, 00:14:17.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.213 "is_configured": false, 00:14:17.213 "data_offset": 0, 00:14:17.213 "data_size": 63488 00:14:17.213 }, 00:14:17.213 { 00:14:17.213 "name": "BaseBdev2", 00:14:17.213 "uuid": "8782aff2-53c7-5ecd-aa4c-4f278b441d94", 00:14:17.213 "is_configured": true, 00:14:17.213 "data_offset": 2048, 00:14:17.213 "data_size": 63488 00:14:17.213 }, 00:14:17.213 { 00:14:17.213 "name": "BaseBdev3", 00:14:17.213 "uuid": "c06294cc-4a6b-5a17-a05b-00773aaffd2f", 00:14:17.213 "is_configured": true, 00:14:17.213 "data_offset": 2048, 00:14:17.213 "data_size": 63488 00:14:17.213 }, 00:14:17.213 { 00:14:17.213 "name": "BaseBdev4", 00:14:17.213 "uuid": "fc76043f-bb56-54be-9d3d-15c25ce9cb8d", 00:14:17.213 "is_configured": true, 00:14:17.213 "data_offset": 2048, 00:14:17.213 "data_size": 63488 00:14:17.213 } 00:14:17.213 ] 
00:14:17.213 }' 00:14:17.213 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.213 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.472 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.472 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.472 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.472 [2024-12-05 20:07:18.840194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.472 [2024-12-05 20:07:18.840227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.472 [2024-12-05 20:07:18.842972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.473 [2024-12-05 20:07:18.843018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.473 [2024-12-05 20:07:18.843127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.473 [2024-12-05 20:07:18.843139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:17.473 { 00:14:17.473 "results": [ 00:14:17.473 { 00:14:17.473 "job": "raid_bdev1", 00:14:17.473 "core_mask": "0x1", 00:14:17.473 "workload": "randrw", 00:14:17.473 "percentage": 50, 00:14:17.473 "status": "finished", 00:14:17.473 "queue_depth": 1, 00:14:17.473 "io_size": 131072, 00:14:17.473 "runtime": 1.414455, 00:14:17.473 "iops": 11240.37173328243, 00:14:17.473 "mibps": 1405.0464666603038, 00:14:17.473 "io_failed": 0, 00:14:17.473 "io_timeout": 0, 00:14:17.473 "avg_latency_us": 86.19658054350181, 00:14:17.473 "min_latency_us": 24.146724890829695, 00:14:17.473 "max_latency_us": 1373.6803493449781 00:14:17.473 } 00:14:17.473 ], 00:14:17.473 "core_count": 1 
00:14:17.473 } 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75290 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75290 ']' 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75290 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75290 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75290' 00:14:17.473 killing process with pid 75290 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75290 00:14:17.473 [2024-12-05 20:07:18.887477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.473 20:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75290 00:14:18.042 [2024-12-05 20:07:19.218933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DMShdDPOkk 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:19.420 ************************************ 00:14:19.420 END TEST 
raid_write_error_test 00:14:19.420 ************************************ 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:19.420 00:14:19.420 real 0m4.790s 00:14:19.420 user 0m5.668s 00:14:19.420 sys 0m0.601s 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.420 20:07:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.420 20:07:20 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:19.420 20:07:20 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:19.420 20:07:20 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:19.420 20:07:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:19.420 20:07:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.420 20:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.420 ************************************ 00:14:19.420 START TEST raid_rebuild_test 00:14:19.420 ************************************ 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75433 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75433 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75433 ']' 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.420 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.420 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.420 Zero copy mechanism will not be used. 00:14:19.420 [2024-12-05 20:07:20.626812] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:14:19.420 [2024-12-05 20:07:20.626957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75433 ] 00:14:19.420 [2024-12-05 20:07:20.802231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.680 [2024-12-05 20:07:20.920354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.999 [2024-12-05 20:07:21.117244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.999 [2024-12-05 20:07:21.117382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 BaseBdev1_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 [2024-12-05 20:07:21.521488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.270 
[2024-12-05 20:07:21.521550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.270 [2024-12-05 20:07:21.521589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.270 [2024-12-05 20:07:21.521613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.270 [2024-12-05 20:07:21.523733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.270 [2024-12-05 20:07:21.523774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.270 BaseBdev1 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 BaseBdev2_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 [2024-12-05 20:07:21.576147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:20.270 [2024-12-05 20:07:21.576302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.270 [2024-12-05 20:07:21.576333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:20.270 [2024-12-05 20:07:21.576345] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.270 [2024-12-05 20:07:21.578822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.270 [2024-12-05 20:07:21.578865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:20.270 BaseBdev2 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 spare_malloc 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 spare_delay 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 [2024-12-05 20:07:21.659483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.270 [2024-12-05 20:07:21.659544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:20.270 [2024-12-05 20:07:21.659564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:20.270 [2024-12-05 20:07:21.659574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.270 [2024-12-05 20:07:21.661770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.270 [2024-12-05 20:07:21.661882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.270 spare 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.270 [2024-12-05 20:07:21.671534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.270 [2024-12-05 20:07:21.673451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.270 [2024-12-05 20:07:21.673548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:20.270 [2024-12-05 20:07:21.673564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:20.270 [2024-12-05 20:07:21.673824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:20.270 [2024-12-05 20:07:21.674003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:20.270 [2024-12-05 20:07:21.674015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:20.270 [2024-12-05 20:07:21.674183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.270 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.271 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.529 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.529 "name": "raid_bdev1", 00:14:20.529 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:20.529 "strip_size_kb": 0, 00:14:20.529 "state": "online", 00:14:20.529 
"raid_level": "raid1", 00:14:20.529 "superblock": false, 00:14:20.529 "num_base_bdevs": 2, 00:14:20.529 "num_base_bdevs_discovered": 2, 00:14:20.529 "num_base_bdevs_operational": 2, 00:14:20.529 "base_bdevs_list": [ 00:14:20.529 { 00:14:20.529 "name": "BaseBdev1", 00:14:20.529 "uuid": "34070642-c280-5254-9aaf-5f706e0fa80b", 00:14:20.529 "is_configured": true, 00:14:20.529 "data_offset": 0, 00:14:20.529 "data_size": 65536 00:14:20.529 }, 00:14:20.529 { 00:14:20.529 "name": "BaseBdev2", 00:14:20.529 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:20.529 "is_configured": true, 00:14:20.529 "data_offset": 0, 00:14:20.529 "data_size": 65536 00:14:20.529 } 00:14:20.529 ] 00:14:20.529 }' 00:14:20.529 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.529 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 [2024-12-05 20:07:22.075149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.787 20:07:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.787 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:21.045 [2024-12-05 20:07:22.370435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:21.045 /dev/nbd0 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.045 1+0 records in 00:14:21.045 1+0 records out 00:14:21.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390664 s, 10.5 MB/s 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:21.045 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:26.318 65536+0 records in 00:14:26.318 65536+0 records out 00:14:26.318 33554432 bytes (34 MB, 32 MiB) copied, 4.74288 s, 7.1 MB/s 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.318 [2024-12-05 20:07:27.411660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.318 [2024-12-05 20:07:27.447687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.318 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.319 20:07:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.319 "name": "raid_bdev1", 00:14:26.319 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:26.319 "strip_size_kb": 0, 00:14:26.319 "state": "online", 00:14:26.319 "raid_level": "raid1", 00:14:26.319 "superblock": false, 00:14:26.319 "num_base_bdevs": 2, 00:14:26.319 "num_base_bdevs_discovered": 1, 00:14:26.319 "num_base_bdevs_operational": 1, 00:14:26.319 "base_bdevs_list": [ 00:14:26.319 { 00:14:26.319 "name": null, 00:14:26.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.319 "is_configured": false, 00:14:26.319 "data_offset": 0, 00:14:26.319 "data_size": 65536 00:14:26.319 }, 00:14:26.319 { 00:14:26.319 "name": "BaseBdev2", 00:14:26.319 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:26.319 "is_configured": true, 00:14:26.319 "data_offset": 0, 00:14:26.319 "data_size": 65536 00:14:26.319 } 00:14:26.319 ] 00:14:26.319 }' 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.319 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.579 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.579 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.579 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.579 [2024-12-05 20:07:27.910972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.579 [2024-12-05 20:07:27.929079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:26.579 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.579 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.579 [2024-12-05 20:07:27.930939] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.519 20:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.778 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.778 "name": "raid_bdev1", 00:14:27.778 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:27.778 "strip_size_kb": 0, 00:14:27.778 "state": "online", 00:14:27.778 "raid_level": "raid1", 00:14:27.778 "superblock": false, 00:14:27.778 "num_base_bdevs": 2, 00:14:27.778 "num_base_bdevs_discovered": 2, 00:14:27.778 "num_base_bdevs_operational": 2, 00:14:27.778 "process": { 00:14:27.778 "type": "rebuild", 00:14:27.778 "target": "spare", 00:14:27.778 "progress": { 00:14:27.778 
"blocks": 20480, 00:14:27.778 "percent": 31 00:14:27.778 } 00:14:27.778 }, 00:14:27.778 "base_bdevs_list": [ 00:14:27.778 { 00:14:27.778 "name": "spare", 00:14:27.778 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:27.778 "is_configured": true, 00:14:27.778 "data_offset": 0, 00:14:27.778 "data_size": 65536 00:14:27.778 }, 00:14:27.779 { 00:14:27.779 "name": "BaseBdev2", 00:14:27.779 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:27.779 "is_configured": true, 00:14:27.779 "data_offset": 0, 00:14:27.779 "data_size": 65536 00:14:27.779 } 00:14:27.779 ] 00:14:27.779 }' 00:14:27.779 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.779 [2024-12-05 20:07:29.066334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.779 [2024-12-05 20:07:29.136755] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.779 [2024-12-05 20:07:29.136963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.779 [2024-12-05 20:07:29.136984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.779 [2024-12-05 20:07:29.136996] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.779 20:07:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.779 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.037 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.037 "name": "raid_bdev1", 00:14:28.037 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:28.037 "strip_size_kb": 0, 00:14:28.038 "state": "online", 00:14:28.038 "raid_level": "raid1", 00:14:28.038 
"superblock": false, 00:14:28.038 "num_base_bdevs": 2, 00:14:28.038 "num_base_bdevs_discovered": 1, 00:14:28.038 "num_base_bdevs_operational": 1, 00:14:28.038 "base_bdevs_list": [ 00:14:28.038 { 00:14:28.038 "name": null, 00:14:28.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.038 "is_configured": false, 00:14:28.038 "data_offset": 0, 00:14:28.038 "data_size": 65536 00:14:28.038 }, 00:14:28.038 { 00:14:28.038 "name": "BaseBdev2", 00:14:28.038 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:28.038 "is_configured": true, 00:14:28.038 "data_offset": 0, 00:14:28.038 "data_size": 65536 00:14:28.038 } 00:14:28.038 ] 00:14:28.038 }' 00:14:28.038 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.038 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:28.298 "name": "raid_bdev1", 00:14:28.298 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:28.298 "strip_size_kb": 0, 00:14:28.298 "state": "online", 00:14:28.298 "raid_level": "raid1", 00:14:28.298 "superblock": false, 00:14:28.298 "num_base_bdevs": 2, 00:14:28.298 "num_base_bdevs_discovered": 1, 00:14:28.298 "num_base_bdevs_operational": 1, 00:14:28.298 "base_bdevs_list": [ 00:14:28.298 { 00:14:28.298 "name": null, 00:14:28.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.298 "is_configured": false, 00:14:28.298 "data_offset": 0, 00:14:28.298 "data_size": 65536 00:14:28.298 }, 00:14:28.298 { 00:14:28.298 "name": "BaseBdev2", 00:14:28.298 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:28.298 "is_configured": true, 00:14:28.298 "data_offset": 0, 00:14:28.298 "data_size": 65536 00:14:28.298 } 00:14:28.298 ] 00:14:28.298 }' 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.298 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.613 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.613 20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.613 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.613 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.613 [2024-12-05 20:07:29.773170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.613 [2024-12-05 20:07:29.792645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:28.613 20:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.613 
20:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.613 [2024-12-05 20:07:29.794823] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.552 "name": "raid_bdev1", 00:14:29.552 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:29.552 "strip_size_kb": 0, 00:14:29.552 "state": "online", 00:14:29.552 "raid_level": "raid1", 00:14:29.552 "superblock": false, 00:14:29.552 "num_base_bdevs": 2, 00:14:29.552 "num_base_bdevs_discovered": 2, 00:14:29.552 "num_base_bdevs_operational": 2, 00:14:29.552 "process": { 00:14:29.552 "type": "rebuild", 00:14:29.552 "target": "spare", 00:14:29.552 "progress": { 00:14:29.552 "blocks": 20480, 00:14:29.552 "percent": 31 00:14:29.552 } 00:14:29.552 }, 00:14:29.552 "base_bdevs_list": [ 
00:14:29.552 { 00:14:29.552 "name": "spare", 00:14:29.552 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:29.552 "is_configured": true, 00:14:29.552 "data_offset": 0, 00:14:29.552 "data_size": 65536 00:14:29.552 }, 00:14:29.552 { 00:14:29.552 "name": "BaseBdev2", 00:14:29.552 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:29.552 "is_configured": true, 00:14:29.552 "data_offset": 0, 00:14:29.552 "data_size": 65536 00:14:29.552 } 00:14:29.552 ] 00:14:29.552 }' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=372 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.552 
20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 20:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.811 20:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.811 "name": "raid_bdev1", 00:14:29.811 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:29.811 "strip_size_kb": 0, 00:14:29.811 "state": "online", 00:14:29.811 "raid_level": "raid1", 00:14:29.811 "superblock": false, 00:14:29.811 "num_base_bdevs": 2, 00:14:29.811 "num_base_bdevs_discovered": 2, 00:14:29.811 "num_base_bdevs_operational": 2, 00:14:29.811 "process": { 00:14:29.811 "type": "rebuild", 00:14:29.811 "target": "spare", 00:14:29.811 "progress": { 00:14:29.811 "blocks": 22528, 00:14:29.811 "percent": 34 00:14:29.811 } 00:14:29.811 }, 00:14:29.811 "base_bdevs_list": [ 00:14:29.811 { 00:14:29.811 "name": "spare", 00:14:29.811 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:29.811 "is_configured": true, 00:14:29.811 "data_offset": 0, 00:14:29.811 "data_size": 65536 00:14:29.811 }, 00:14:29.811 { 00:14:29.811 "name": "BaseBdev2", 00:14:29.811 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:29.811 "is_configured": true, 00:14:29.811 "data_offset": 0, 00:14:29.811 "data_size": 65536 00:14:29.811 } 00:14:29.811 ] 00:14:29.811 }' 00:14:29.811 20:07:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.811 20:07:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:29.811 20:07:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.811 20:07:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.811 20:07:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.749 "name": "raid_bdev1", 00:14:30.749 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:30.749 "strip_size_kb": 0, 00:14:30.749 "state": "online", 00:14:30.749 "raid_level": "raid1", 00:14:30.749 "superblock": false, 00:14:30.749 "num_base_bdevs": 2, 00:14:30.749 "num_base_bdevs_discovered": 2, 00:14:30.749 "num_base_bdevs_operational": 2, 00:14:30.749 "process": { 
00:14:30.749 "type": "rebuild", 00:14:30.749 "target": "spare", 00:14:30.749 "progress": { 00:14:30.749 "blocks": 47104, 00:14:30.749 "percent": 71 00:14:30.749 } 00:14:30.749 }, 00:14:30.749 "base_bdevs_list": [ 00:14:30.749 { 00:14:30.749 "name": "spare", 00:14:30.749 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:30.749 "is_configured": true, 00:14:30.749 "data_offset": 0, 00:14:30.749 "data_size": 65536 00:14:30.749 }, 00:14:30.749 { 00:14:30.749 "name": "BaseBdev2", 00:14:30.749 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:30.749 "is_configured": true, 00:14:30.749 "data_offset": 0, 00:14:30.749 "data_size": 65536 00:14:30.749 } 00:14:30.749 ] 00:14:30.749 }' 00:14:30.749 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.010 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.010 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.010 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.010 20:07:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.579 [2024-12-05 20:07:33.010749] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.579 [2024-12-05 20:07:33.010845] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.579 [2024-12-05 20:07:33.010910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.839 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.098 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.098 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.098 "name": "raid_bdev1", 00:14:32.098 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:32.098 "strip_size_kb": 0, 00:14:32.098 "state": "online", 00:14:32.098 "raid_level": "raid1", 00:14:32.098 "superblock": false, 00:14:32.098 "num_base_bdevs": 2, 00:14:32.098 "num_base_bdevs_discovered": 2, 00:14:32.098 "num_base_bdevs_operational": 2, 00:14:32.098 "base_bdevs_list": [ 00:14:32.098 { 00:14:32.098 "name": "spare", 00:14:32.098 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:32.098 "is_configured": true, 00:14:32.098 "data_offset": 0, 00:14:32.098 "data_size": 65536 00:14:32.098 }, 00:14:32.098 { 00:14:32.098 "name": "BaseBdev2", 00:14:32.098 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:32.098 "is_configured": true, 00:14:32.098 "data_offset": 0, 00:14:32.098 "data_size": 65536 00:14:32.098 } 00:14:32.098 ] 00:14:32.098 }' 00:14:32.098 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.098 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:32.099 20:07:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.099 "name": "raid_bdev1", 00:14:32.099 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:32.099 "strip_size_kb": 0, 00:14:32.099 "state": "online", 00:14:32.099 "raid_level": "raid1", 00:14:32.099 "superblock": false, 00:14:32.099 "num_base_bdevs": 2, 00:14:32.099 "num_base_bdevs_discovered": 2, 00:14:32.099 "num_base_bdevs_operational": 2, 00:14:32.099 "base_bdevs_list": [ 00:14:32.099 { 00:14:32.099 "name": "spare", 00:14:32.099 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:32.099 "is_configured": true, 
00:14:32.099 "data_offset": 0, 00:14:32.099 "data_size": 65536 00:14:32.099 }, 00:14:32.099 { 00:14:32.099 "name": "BaseBdev2", 00:14:32.099 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:32.099 "is_configured": true, 00:14:32.099 "data_offset": 0, 00:14:32.099 "data_size": 65536 00:14:32.099 } 00:14:32.099 ] 00:14:32.099 }' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.099 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.359 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.359 "name": "raid_bdev1", 00:14:32.359 "uuid": "864a9208-c2fa-4db2-b24e-140029b05acd", 00:14:32.359 "strip_size_kb": 0, 00:14:32.359 "state": "online", 00:14:32.359 "raid_level": "raid1", 00:14:32.359 "superblock": false, 00:14:32.359 "num_base_bdevs": 2, 00:14:32.359 "num_base_bdevs_discovered": 2, 00:14:32.359 "num_base_bdevs_operational": 2, 00:14:32.359 "base_bdevs_list": [ 00:14:32.359 { 00:14:32.359 "name": "spare", 00:14:32.359 "uuid": "3296870f-cb87-5802-b744-aa5558949ef6", 00:14:32.359 "is_configured": true, 00:14:32.359 "data_offset": 0, 00:14:32.359 "data_size": 65536 00:14:32.359 }, 00:14:32.359 { 00:14:32.359 "name": "BaseBdev2", 00:14:32.359 "uuid": "4e21ea2c-2643-5edb-a3a7-3c5213282b89", 00:14:32.359 "is_configured": true, 00:14:32.359 "data_offset": 0, 00:14:32.359 "data_size": 65536 00:14:32.359 } 00:14:32.359 ] 00:14:32.359 }' 00:14:32.359 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.359 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.618 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.618 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.618 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.618 [2024-12-05 20:07:33.949499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.619 [2024-12-05 20:07:33.949588] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.619 [2024-12-05 20:07:33.949715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.619 [2024-12-05 20:07:33.949829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.619 [2024-12-05 20:07:33.949896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.619 20:07:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.619 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.878 /dev/nbd0 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.878 1+0 records in 00:14:32.878 1+0 records out 00:14:32.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293151 s, 14.0 MB/s 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.878 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:33.138 /dev/nbd1 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.138 1+0 records in 00:14:33.138 1+0 records out 00:14:33.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552582 s, 7.4 MB/s 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:33.138 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.399 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.663 20:07:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75433 00:14:33.983 20:07:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75433 ']' 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75433 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75433 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75433' 00:14:33.983 killing process with pid 75433 00:14:33.983 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.983 00:14:33.983 Latency(us) 00:14:33.983 [2024-12-05T20:07:35.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.983 [2024-12-05T20:07:35.420Z] =================================================================================================================== 00:14:33.983 [2024-12-05T20:07:35.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75433 00:14:33.983 [2024-12-05 20:07:35.205642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.983 20:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75433 00:14:34.241 [2024-12-05 20:07:35.505322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.622 20:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:35.622 00:14:35.622 real 0m16.102s 00:14:35.622 user 0m18.165s 00:14:35.622 sys 0m3.088s 00:14:35.622 20:07:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.622 ************************************ 00:14:35.622 END TEST raid_rebuild_test 00:14:35.622 ************************************ 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.623 20:07:36 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:35.623 20:07:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:35.623 20:07:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.623 20:07:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.623 ************************************ 00:14:35.623 START TEST raid_rebuild_test_sb 00:14:35.623 ************************************ 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75857 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75857 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75857 ']' 00:14:35.623 20:07:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.623 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.623 [2024-12-05 20:07:36.779523] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:14:35.623 [2024-12-05 20:07:36.779741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75857 ] 00:14:35.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.623 Zero copy mechanism will not be used. 
00:14:35.623 [2024-12-05 20:07:36.930061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.623 [2024-12-05 20:07:37.043855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.882 [2024-12-05 20:07:37.242050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.882 [2024-12-05 20:07:37.242205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.449 BaseBdev1_malloc 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.449 [2024-12-05 20:07:37.671720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.449 [2024-12-05 20:07:37.671798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:36.449 [2024-12-05 20:07:37.671821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.449 [2024-12-05 20:07:37.671832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.449 [2024-12-05 20:07:37.674106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.449 [2024-12-05 20:07:37.674201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.449 BaseBdev1 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.449 BaseBdev2_malloc 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.449 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 [2024-12-05 20:07:37.727530] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.450 [2024-12-05 20:07:37.727669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.450 [2024-12-05 20:07:37.727698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.450 [2024-12-05 20:07:37.727711] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.450 [2024-12-05 20:07:37.730086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.450 [2024-12-05 20:07:37.730127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.450 BaseBdev2 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 spare_malloc 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 spare_delay 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 [2024-12-05 20:07:37.814714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.450 [2024-12-05 20:07:37.814790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:36.450 [2024-12-05 20:07:37.814810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:36.450 [2024-12-05 20:07:37.814821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.450 [2024-12-05 20:07:37.816980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.450 [2024-12-05 20:07:37.817020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.450 spare 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 [2024-12-05 20:07:37.826745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.450 [2024-12-05 20:07:37.828474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.450 [2024-12-05 20:07:37.828715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:36.450 [2024-12-05 20:07:37.828734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.450 [2024-12-05 20:07:37.828986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.450 [2024-12-05 20:07:37.829142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.450 [2024-12-05 20:07:37.829151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.450 [2024-12-05 20:07:37.829294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.450 "name": "raid_bdev1", 00:14:36.450 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:36.450 
"strip_size_kb": 0, 00:14:36.450 "state": "online", 00:14:36.450 "raid_level": "raid1", 00:14:36.450 "superblock": true, 00:14:36.450 "num_base_bdevs": 2, 00:14:36.450 "num_base_bdevs_discovered": 2, 00:14:36.450 "num_base_bdevs_operational": 2, 00:14:36.450 "base_bdevs_list": [ 00:14:36.450 { 00:14:36.450 "name": "BaseBdev1", 00:14:36.450 "uuid": "b0686304-49d2-5804-bb96-c9c632e82930", 00:14:36.450 "is_configured": true, 00:14:36.450 "data_offset": 2048, 00:14:36.450 "data_size": 63488 00:14:36.450 }, 00:14:36.450 { 00:14:36.450 "name": "BaseBdev2", 00:14:36.450 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:36.450 "is_configured": true, 00:14:36.450 "data_offset": 2048, 00:14:36.450 "data_size": 63488 00:14:36.450 } 00:14:36.450 ] 00:14:36.450 }' 00:14:36.450 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.708 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:36.967 [2024-12-05 20:07:38.266314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:36.967 20:07:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.967 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:37.241 [2024-12-05 20:07:38.557593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.241 /dev/nbd0 00:14:37.241 
20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.241 1+0 records in 00:14:37.241 1+0 records out 00:14:37.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399871 s, 10.2 MB/s 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:37.241 20:07:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:37.241 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:41.436 63488+0 records in 00:14:41.436 63488+0 records out 00:14:41.436 32505856 bytes (33 MB, 31 MiB) copied, 4.20872 s, 7.7 MB/s 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.436 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.695 [2024-12-05 20:07:43.021592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.695 [2024-12-05 20:07:43.057591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.695 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.696 "name": "raid_bdev1", 00:14:41.696 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:41.696 "strip_size_kb": 0, 00:14:41.696 "state": "online", 00:14:41.696 "raid_level": "raid1", 00:14:41.696 "superblock": true, 00:14:41.696 "num_base_bdevs": 2, 00:14:41.696 "num_base_bdevs_discovered": 1, 00:14:41.696 "num_base_bdevs_operational": 1, 00:14:41.696 "base_bdevs_list": [ 00:14:41.696 { 00:14:41.696 "name": null, 00:14:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.696 "is_configured": false, 00:14:41.696 "data_offset": 0, 00:14:41.696 "data_size": 63488 00:14:41.696 }, 00:14:41.696 { 00:14:41.696 "name": "BaseBdev2", 00:14:41.696 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:41.696 "is_configured": true, 00:14:41.696 "data_offset": 2048, 00:14:41.696 "data_size": 63488 00:14:41.696 } 00:14:41.696 ] 00:14:41.696 }' 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.696 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.264 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.264 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
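The `verify_raid_bdev_state` helper traced above selects one bdev out of the `bdev_raid_get_bdevs all` RPC output with `jq -r '.[] | select(.name == "raid_bdev1")'`, then checks its fields. A minimal Python sketch of the same selection, run against a subset of the JSON captured verbatim in the dump above (the base-bdev list is trimmed for brevity):

```python
import json

# JSON array as returned by `bdev_raid_get_bdevs all`; field values are
# copied from the raid_bdev_info dump recorded in this log.
bdevs = json.loads("""[{
  "name": "raid_bdev1",
  "uuid": "373e1846-5416-44fd-aee9-117358b4f058",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The checks verify_raid_bdev_state performs after removing BaseBdev1:
# still online, raid1, but only one of two base bdevs remains.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs"] == 2
assert info["num_base_bdevs_discovered"] == 1
assert info["num_base_bdevs_operational"] == 1
```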
00:14:42.264 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.264 [2024-12-05 20:07:43.512883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.264 [2024-12-05 20:07:43.531030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:42.264 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.264 [2024-12-05 20:07:43.533055] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.264 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.202 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.203 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.203 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.203 "name": "raid_bdev1", 00:14:43.203 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 
00:14:43.203 "strip_size_kb": 0, 00:14:43.203 "state": "online", 00:14:43.203 "raid_level": "raid1", 00:14:43.203 "superblock": true, 00:14:43.203 "num_base_bdevs": 2, 00:14:43.203 "num_base_bdevs_discovered": 2, 00:14:43.203 "num_base_bdevs_operational": 2, 00:14:43.203 "process": { 00:14:43.203 "type": "rebuild", 00:14:43.203 "target": "spare", 00:14:43.203 "progress": { 00:14:43.203 "blocks": 20480, 00:14:43.203 "percent": 32 00:14:43.203 } 00:14:43.203 }, 00:14:43.203 "base_bdevs_list": [ 00:14:43.203 { 00:14:43.203 "name": "spare", 00:14:43.203 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:43.203 "is_configured": true, 00:14:43.203 "data_offset": 2048, 00:14:43.203 "data_size": 63488 00:14:43.203 }, 00:14:43.203 { 00:14:43.203 "name": "BaseBdev2", 00:14:43.203 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:43.203 "is_configured": true, 00:14:43.203 "data_offset": 2048, 00:14:43.203 "data_size": 63488 00:14:43.203 } 00:14:43.203 ] 00:14:43.203 }' 00:14:43.203 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.462 [2024-12-05 20:07:44.676668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.462 [2024-12-05 20:07:44.738959] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:14:43.462 [2024-12-05 20:07:44.739022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.462 [2024-12-05 20:07:44.739038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.462 [2024-12-05 20:07:44.739050] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.462 20:07:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.462 "name": "raid_bdev1", 00:14:43.462 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:43.462 "strip_size_kb": 0, 00:14:43.462 "state": "online", 00:14:43.462 "raid_level": "raid1", 00:14:43.462 "superblock": true, 00:14:43.462 "num_base_bdevs": 2, 00:14:43.462 "num_base_bdevs_discovered": 1, 00:14:43.462 "num_base_bdevs_operational": 1, 00:14:43.462 "base_bdevs_list": [ 00:14:43.462 { 00:14:43.462 "name": null, 00:14:43.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.462 "is_configured": false, 00:14:43.462 "data_offset": 0, 00:14:43.462 "data_size": 63488 00:14:43.462 }, 00:14:43.462 { 00:14:43.462 "name": "BaseBdev2", 00:14:43.462 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:43.462 "is_configured": true, 00:14:43.462 "data_offset": 2048, 00:14:43.462 "data_size": 63488 00:14:43.462 } 00:14:43.462 ] 00:14:43.462 }' 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.462 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.030 "name": "raid_bdev1", 00:14:44.030 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:44.030 "strip_size_kb": 0, 00:14:44.030 "state": "online", 00:14:44.030 "raid_level": "raid1", 00:14:44.030 "superblock": true, 00:14:44.030 "num_base_bdevs": 2, 00:14:44.030 "num_base_bdevs_discovered": 1, 00:14:44.030 "num_base_bdevs_operational": 1, 00:14:44.030 "base_bdevs_list": [ 00:14:44.030 { 00:14:44.030 "name": null, 00:14:44.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.030 "is_configured": false, 00:14:44.030 "data_offset": 0, 00:14:44.030 "data_size": 63488 00:14:44.030 }, 00:14:44.030 { 00:14:44.030 "name": "BaseBdev2", 00:14:44.030 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:44.030 "is_configured": true, 00:14:44.030 "data_offset": 2048, 00:14:44.030 "data_size": 63488 00:14:44.030 } 00:14:44.030 ] 00:14:44.030 }' 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.030 [2024-12-05 20:07:45.314962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.030 [2024-12-05 20:07:45.331168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.030 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.030 [2024-12-05 20:07:45.333112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:44.965 "name": "raid_bdev1", 00:14:44.965 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:44.965 "strip_size_kb": 0, 00:14:44.965 "state": "online", 00:14:44.965 "raid_level": "raid1", 00:14:44.965 "superblock": true, 00:14:44.965 "num_base_bdevs": 2, 00:14:44.965 "num_base_bdevs_discovered": 2, 00:14:44.965 "num_base_bdevs_operational": 2, 00:14:44.965 "process": { 00:14:44.965 "type": "rebuild", 00:14:44.965 "target": "spare", 00:14:44.965 "progress": { 00:14:44.965 "blocks": 20480, 00:14:44.965 "percent": 32 00:14:44.965 } 00:14:44.965 }, 00:14:44.965 "base_bdevs_list": [ 00:14:44.965 { 00:14:44.965 "name": "spare", 00:14:44.965 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:44.965 "is_configured": true, 00:14:44.965 "data_offset": 2048, 00:14:44.965 "data_size": 63488 00:14:44.965 }, 00:14:44.965 { 00:14:44.965 "name": "BaseBdev2", 00:14:44.965 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:44.965 "is_configured": true, 00:14:44.965 "data_offset": 2048, 00:14:44.965 "data_size": 63488 00:14:44.965 } 00:14:44.965 ] 00:14:44.965 }' 00:14:44.965 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:45.224 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:45.224 20:07:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=388 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.224 "name": "raid_bdev1", 00:14:45.224 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:45.224 "strip_size_kb": 0, 00:14:45.224 "state": "online", 00:14:45.224 "raid_level": "raid1", 00:14:45.224 "superblock": true, 00:14:45.224 "num_base_bdevs": 2, 00:14:45.224 "num_base_bdevs_discovered": 2, 00:14:45.224 "num_base_bdevs_operational": 2, 00:14:45.224 "process": { 00:14:45.224 
"type": "rebuild", 00:14:45.224 "target": "spare", 00:14:45.224 "progress": { 00:14:45.224 "blocks": 22528, 00:14:45.224 "percent": 35 00:14:45.224 } 00:14:45.224 }, 00:14:45.224 "base_bdevs_list": [ 00:14:45.224 { 00:14:45.224 "name": "spare", 00:14:45.224 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:45.224 "is_configured": true, 00:14:45.224 "data_offset": 2048, 00:14:45.224 "data_size": 63488 00:14:45.224 }, 00:14:45.224 { 00:14:45.224 "name": "BaseBdev2", 00:14:45.224 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:45.224 "is_configured": true, 00:14:45.224 "data_offset": 2048, 00:14:45.224 "data_size": 63488 00:14:45.224 } 00:14:45.224 ] 00:14:45.224 }' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.224 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.159 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.418 "name": "raid_bdev1", 00:14:46.418 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:46.418 "strip_size_kb": 0, 00:14:46.418 "state": "online", 00:14:46.418 "raid_level": "raid1", 00:14:46.418 "superblock": true, 00:14:46.418 "num_base_bdevs": 2, 00:14:46.418 "num_base_bdevs_discovered": 2, 00:14:46.418 "num_base_bdevs_operational": 2, 00:14:46.418 "process": { 00:14:46.418 "type": "rebuild", 00:14:46.418 "target": "spare", 00:14:46.418 "progress": { 00:14:46.418 "blocks": 45056, 00:14:46.418 "percent": 70 00:14:46.418 } 00:14:46.418 }, 00:14:46.418 "base_bdevs_list": [ 00:14:46.418 { 00:14:46.418 "name": "spare", 00:14:46.418 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 2048, 00:14:46.418 "data_size": 63488 00:14:46.418 }, 00:14:46.418 { 00:14:46.418 "name": "BaseBdev2", 00:14:46.418 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 2048, 00:14:46.418 "data_size": 63488 00:14:46.418 } 00:14:46.418 ] 00:14:46.418 }' 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.418 
20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.418 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.352 [2024-12-05 20:07:48.447382] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:47.352 [2024-12-05 20:07:48.447531] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:47.352 [2024-12-05 20:07:48.447668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.352 "name": "raid_bdev1", 00:14:47.352 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:47.352 
"strip_size_kb": 0, 00:14:47.352 "state": "online", 00:14:47.352 "raid_level": "raid1", 00:14:47.352 "superblock": true, 00:14:47.352 "num_base_bdevs": 2, 00:14:47.352 "num_base_bdevs_discovered": 2, 00:14:47.352 "num_base_bdevs_operational": 2, 00:14:47.352 "base_bdevs_list": [ 00:14:47.352 { 00:14:47.352 "name": "spare", 00:14:47.352 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:47.352 "is_configured": true, 00:14:47.352 "data_offset": 2048, 00:14:47.352 "data_size": 63488 00:14:47.352 }, 00:14:47.352 { 00:14:47.352 "name": "BaseBdev2", 00:14:47.352 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:47.352 "is_configured": true, 00:14:47.352 "data_offset": 2048, 00:14:47.352 "data_size": 63488 00:14:47.352 } 00:14:47.352 ] 00:14:47.352 }' 00:14:47.352 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.633 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.634 20:07:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.634 "name": "raid_bdev1", 00:14:47.634 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:47.634 "strip_size_kb": 0, 00:14:47.634 "state": "online", 00:14:47.634 "raid_level": "raid1", 00:14:47.634 "superblock": true, 00:14:47.634 "num_base_bdevs": 2, 00:14:47.634 "num_base_bdevs_discovered": 2, 00:14:47.634 "num_base_bdevs_operational": 2, 00:14:47.634 "base_bdevs_list": [ 00:14:47.634 { 00:14:47.634 "name": "spare", 00:14:47.634 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:47.634 "is_configured": true, 00:14:47.634 "data_offset": 2048, 00:14:47.634 "data_size": 63488 00:14:47.634 }, 00:14:47.634 { 00:14:47.634 "name": "BaseBdev2", 00:14:47.634 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:47.634 "is_configured": true, 00:14:47.634 "data_offset": 2048, 00:14:47.634 "data_size": 63488 00:14:47.634 } 00:14:47.634 ] 00:14:47.634 }' 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.634 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.634 20:07:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.634 "name": "raid_bdev1", 00:14:47.634 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:47.634 "strip_size_kb": 0, 00:14:47.634 "state": "online", 00:14:47.634 "raid_level": "raid1", 00:14:47.634 "superblock": true, 00:14:47.634 "num_base_bdevs": 2, 00:14:47.634 "num_base_bdevs_discovered": 2, 00:14:47.634 "num_base_bdevs_operational": 2, 00:14:47.634 "base_bdevs_list": [ 00:14:47.634 { 
00:14:47.634 "name": "spare", 00:14:47.634 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:47.634 "is_configured": true, 00:14:47.634 "data_offset": 2048, 00:14:47.634 "data_size": 63488 00:14:47.634 }, 00:14:47.634 { 00:14:47.634 "name": "BaseBdev2", 00:14:47.634 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:47.634 "is_configured": true, 00:14:47.634 "data_offset": 2048, 00:14:47.634 "data_size": 63488 00:14:47.634 } 00:14:47.634 ] 00:14:47.634 }' 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.634 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.223 [2024-12-05 20:07:49.409254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.223 [2024-12-05 20:07:49.409293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.223 [2024-12-05 20:07:49.409388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.223 [2024-12-05 20:07:49.409484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.223 [2024-12-05 20:07:49.409498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:48.223 20:07:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.223 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:48.481 /dev/nbd0 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.481 
20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.481 1+0 records in 00:14:48.481 1+0 records out 00:14:48.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448907 s, 9.1 MB/s 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.481 20:07:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:48.739 /dev/nbd1 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.739 1+0 records in 00:14:48.739 1+0 records out 00:14:48.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413865 s, 9.9 MB/s 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.739 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.739 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:48.997 
20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.997 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.256 [2024-12-05 20:07:50.619248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.256 [2024-12-05 20:07:50.619328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.256 [2024-12-05 20:07:50.619357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.256 [2024-12-05 20:07:50.619366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.256 [2024-12-05 20:07:50.621573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.256 [2024-12-05 20:07:50.621660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.256 [2024-12-05 20:07:50.621766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:49.256 [2024-12-05 20:07:50.621822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.256 [2024-12-05 20:07:50.621981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.256 spare 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.256 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.514 [2024-12-05 20:07:50.721884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:49.514 [2024-12-05 20:07:50.722008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.514 [2024-12-05 20:07:50.722363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:49.515 [2024-12-05 
20:07:50.722623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:49.515 [2024-12-05 20:07:50.722671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:49.515 [2024-12-05 20:07:50.722927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.515 "name": "raid_bdev1", 00:14:49.515 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:49.515 "strip_size_kb": 0, 00:14:49.515 "state": "online", 00:14:49.515 "raid_level": "raid1", 00:14:49.515 "superblock": true, 00:14:49.515 "num_base_bdevs": 2, 00:14:49.515 "num_base_bdevs_discovered": 2, 00:14:49.515 "num_base_bdevs_operational": 2, 00:14:49.515 "base_bdevs_list": [ 00:14:49.515 { 00:14:49.515 "name": "spare", 00:14:49.515 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:49.515 "is_configured": true, 00:14:49.515 "data_offset": 2048, 00:14:49.515 "data_size": 63488 00:14:49.515 }, 00:14:49.515 { 00:14:49.515 "name": "BaseBdev2", 00:14:49.515 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:49.515 "is_configured": true, 00:14:49.515 "data_offset": 2048, 00:14:49.515 "data_size": 63488 00:14:49.515 } 00:14:49.515 ] 00:14:49.515 }' 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.515 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.773 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.773 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.773 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.774 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.032 "name": "raid_bdev1", 00:14:50.032 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:50.032 "strip_size_kb": 0, 00:14:50.032 "state": "online", 00:14:50.032 "raid_level": "raid1", 00:14:50.032 "superblock": true, 00:14:50.032 "num_base_bdevs": 2, 00:14:50.032 "num_base_bdevs_discovered": 2, 00:14:50.032 "num_base_bdevs_operational": 2, 00:14:50.032 "base_bdevs_list": [ 00:14:50.032 { 00:14:50.032 "name": "spare", 00:14:50.032 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:50.032 "is_configured": true, 00:14:50.032 "data_offset": 2048, 00:14:50.032 "data_size": 63488 00:14:50.032 }, 00:14:50.032 { 00:14:50.032 "name": "BaseBdev2", 00:14:50.032 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:50.032 "is_configured": true, 00:14:50.032 "data_offset": 2048, 00:14:50.032 "data_size": 63488 00:14:50.032 } 00:14:50.032 ] 00:14:50.032 }' 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.032 20:07:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.032 [2024-12-05 20:07:51.374037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.032 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.033 "name": "raid_bdev1", 00:14:50.033 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:50.033 "strip_size_kb": 0, 00:14:50.033 "state": "online", 00:14:50.033 "raid_level": "raid1", 00:14:50.033 "superblock": true, 00:14:50.033 "num_base_bdevs": 2, 00:14:50.033 "num_base_bdevs_discovered": 1, 00:14:50.033 "num_base_bdevs_operational": 1, 00:14:50.033 "base_bdevs_list": [ 00:14:50.033 { 00:14:50.033 "name": null, 00:14:50.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.033 "is_configured": false, 00:14:50.033 "data_offset": 0, 00:14:50.033 "data_size": 63488 00:14:50.033 }, 00:14:50.033 { 00:14:50.033 "name": "BaseBdev2", 00:14:50.033 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:50.033 "is_configured": true, 00:14:50.033 "data_offset": 2048, 00:14:50.033 "data_size": 63488 00:14:50.033 } 00:14:50.033 ] 00:14:50.033 }' 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.033 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.601 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:14:50.601 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.601 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.601 [2024-12-05 20:07:51.801378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.601 [2024-12-05 20:07:51.801698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:50.601 [2024-12-05 20:07:51.801782] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:50.601 [2024-12-05 20:07:51.801849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.601 [2024-12-05 20:07:51.818308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:50.601 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.601 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:50.601 [2024-12-05 20:07:51.820502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.539 "name": "raid_bdev1", 00:14:51.539 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:51.539 "strip_size_kb": 0, 00:14:51.539 "state": "online", 00:14:51.539 "raid_level": "raid1", 00:14:51.539 "superblock": true, 00:14:51.539 "num_base_bdevs": 2, 00:14:51.539 "num_base_bdevs_discovered": 2, 00:14:51.539 "num_base_bdevs_operational": 2, 00:14:51.539 "process": { 00:14:51.539 "type": "rebuild", 00:14:51.539 "target": "spare", 00:14:51.539 "progress": { 00:14:51.539 "blocks": 20480, 00:14:51.539 "percent": 32 00:14:51.539 } 00:14:51.539 }, 00:14:51.539 "base_bdevs_list": [ 00:14:51.539 { 00:14:51.539 "name": "spare", 00:14:51.539 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:51.539 "is_configured": true, 00:14:51.539 "data_offset": 2048, 00:14:51.539 "data_size": 63488 00:14:51.539 }, 00:14:51.539 { 00:14:51.539 "name": "BaseBdev2", 00:14:51.539 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:51.539 "is_configured": true, 00:14:51.539 "data_offset": 2048, 00:14:51.539 "data_size": 63488 00:14:51.539 } 00:14:51.539 ] 00:14:51.539 }' 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.539 20:07:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.539 20:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.539 [2024-12-05 20:07:52.959616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.817 [2024-12-05 20:07:53.026524] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.817 [2024-12-05 20:07:53.026660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.817 [2024-12-05 20:07:53.026698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.817 [2024-12-05 20:07:53.026722] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.817 "name": "raid_bdev1", 00:14:51.817 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:51.817 "strip_size_kb": 0, 00:14:51.817 "state": "online", 00:14:51.817 "raid_level": "raid1", 00:14:51.817 "superblock": true, 00:14:51.817 "num_base_bdevs": 2, 00:14:51.817 "num_base_bdevs_discovered": 1, 00:14:51.817 "num_base_bdevs_operational": 1, 00:14:51.817 "base_bdevs_list": [ 00:14:51.817 { 00:14:51.817 "name": null, 00:14:51.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.817 "is_configured": false, 00:14:51.817 "data_offset": 0, 00:14:51.817 "data_size": 63488 00:14:51.817 }, 00:14:51.817 { 00:14:51.817 "name": "BaseBdev2", 00:14:51.817 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:51.817 "is_configured": true, 00:14:51.817 "data_offset": 2048, 00:14:51.817 "data_size": 63488 00:14:51.817 } 00:14:51.817 ] 00:14:51.817 }' 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.817 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.077 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:14:52.077 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.077 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.077 [2024-12-05 20:07:53.484964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.077 [2024-12-05 20:07:53.485104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.077 [2024-12-05 20:07:53.485148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:52.077 [2024-12-05 20:07:53.485187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.077 [2024-12-05 20:07:53.485751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.077 [2024-12-05 20:07:53.485823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.077 [2024-12-05 20:07:53.485986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:52.077 [2024-12-05 20:07:53.486033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:52.077 [2024-12-05 20:07:53.486074] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:52.077 [2024-12-05 20:07:53.486144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.077 [2024-12-05 20:07:53.502788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:52.077 spare 00:14:52.077 20:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.077 [2024-12-05 20:07:53.504762] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.077 20:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.456 "name": "raid_bdev1", 00:14:53.456 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:53.456 "strip_size_kb": 0, 00:14:53.456 "state": "online", 00:14:53.456 
"raid_level": "raid1", 00:14:53.456 "superblock": true, 00:14:53.456 "num_base_bdevs": 2, 00:14:53.456 "num_base_bdevs_discovered": 2, 00:14:53.456 "num_base_bdevs_operational": 2, 00:14:53.456 "process": { 00:14:53.456 "type": "rebuild", 00:14:53.456 "target": "spare", 00:14:53.456 "progress": { 00:14:53.456 "blocks": 20480, 00:14:53.456 "percent": 32 00:14:53.456 } 00:14:53.456 }, 00:14:53.456 "base_bdevs_list": [ 00:14:53.456 { 00:14:53.456 "name": "spare", 00:14:53.456 "uuid": "e6cac7eb-1424-5a87-a9bb-81c4823916f7", 00:14:53.456 "is_configured": true, 00:14:53.456 "data_offset": 2048, 00:14:53.456 "data_size": 63488 00:14:53.456 }, 00:14:53.456 { 00:14:53.456 "name": "BaseBdev2", 00:14:53.456 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:53.456 "is_configured": true, 00:14:53.456 "data_offset": 2048, 00:14:53.456 "data_size": 63488 00:14:53.456 } 00:14:53.456 ] 00:14:53.456 }' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.456 [2024-12-05 20:07:54.668785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.456 [2024-12-05 20:07:54.710195] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.456 [2024-12-05 20:07:54.710272] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.456 [2024-12-05 20:07:54.710290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.456 [2024-12-05 20:07:54.710298] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.456 20:07:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.456 "name": "raid_bdev1", 00:14:53.456 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:53.456 "strip_size_kb": 0, 00:14:53.456 "state": "online", 00:14:53.456 "raid_level": "raid1", 00:14:53.456 "superblock": true, 00:14:53.456 "num_base_bdevs": 2, 00:14:53.456 "num_base_bdevs_discovered": 1, 00:14:53.456 "num_base_bdevs_operational": 1, 00:14:53.456 "base_bdevs_list": [ 00:14:53.456 { 00:14:53.456 "name": null, 00:14:53.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.456 "is_configured": false, 00:14:53.456 "data_offset": 0, 00:14:53.456 "data_size": 63488 00:14:53.456 }, 00:14:53.456 { 00:14:53.456 "name": "BaseBdev2", 00:14:53.456 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:53.456 "is_configured": true, 00:14:53.456 "data_offset": 2048, 00:14:53.456 "data_size": 63488 00:14:53.456 } 00:14:53.456 ] 00:14:53.456 }' 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.456 20:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.024 "name": "raid_bdev1", 00:14:54.024 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:54.024 "strip_size_kb": 0, 00:14:54.024 "state": "online", 00:14:54.024 "raid_level": "raid1", 00:14:54.024 "superblock": true, 00:14:54.024 "num_base_bdevs": 2, 00:14:54.024 "num_base_bdevs_discovered": 1, 00:14:54.024 "num_base_bdevs_operational": 1, 00:14:54.024 "base_bdevs_list": [ 00:14:54.024 { 00:14:54.024 "name": null, 00:14:54.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.024 "is_configured": false, 00:14:54.024 "data_offset": 0, 00:14:54.024 "data_size": 63488 00:14:54.024 }, 00:14:54.024 { 00:14:54.024 "name": "BaseBdev2", 00:14:54.024 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:54.024 "is_configured": true, 00:14:54.024 "data_offset": 2048, 00:14:54.024 "data_size": 63488 00:14:54.024 } 00:14:54.024 ] 00:14:54.024 }' 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.024 [2024-12-05 20:07:55.337810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:54.024 [2024-12-05 20:07:55.337872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.024 [2024-12-05 20:07:55.337942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:54.024 [2024-12-05 20:07:55.337960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.024 [2024-12-05 20:07:55.338423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.024 [2024-12-05 20:07:55.338453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.024 [2024-12-05 20:07:55.338540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:54.024 [2024-12-05 20:07:55.338560] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:54.024 [2024-12-05 20:07:55.338569] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:54.024 [2024-12-05 20:07:55.338579] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:54.024 BaseBdev1 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:54.024 20:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.959 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.217 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.217 "name": "raid_bdev1", 00:14:55.217 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:55.217 "strip_size_kb": 0, 
00:14:55.217 "state": "online", 00:14:55.217 "raid_level": "raid1", 00:14:55.217 "superblock": true, 00:14:55.217 "num_base_bdevs": 2, 00:14:55.217 "num_base_bdevs_discovered": 1, 00:14:55.217 "num_base_bdevs_operational": 1, 00:14:55.217 "base_bdevs_list": [ 00:14:55.217 { 00:14:55.217 "name": null, 00:14:55.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.217 "is_configured": false, 00:14:55.217 "data_offset": 0, 00:14:55.217 "data_size": 63488 00:14:55.217 }, 00:14:55.217 { 00:14:55.217 "name": "BaseBdev2", 00:14:55.217 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:55.217 "is_configured": true, 00:14:55.217 "data_offset": 2048, 00:14:55.217 "data_size": 63488 00:14:55.217 } 00:14:55.217 ] 00:14:55.217 }' 00:14:55.217 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.217 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.476 "name": "raid_bdev1", 00:14:55.476 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:55.476 "strip_size_kb": 0, 00:14:55.476 "state": "online", 00:14:55.476 "raid_level": "raid1", 00:14:55.476 "superblock": true, 00:14:55.476 "num_base_bdevs": 2, 00:14:55.476 "num_base_bdevs_discovered": 1, 00:14:55.476 "num_base_bdevs_operational": 1, 00:14:55.476 "base_bdevs_list": [ 00:14:55.476 { 00:14:55.476 "name": null, 00:14:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.476 "is_configured": false, 00:14:55.476 "data_offset": 0, 00:14:55.476 "data_size": 63488 00:14:55.476 }, 00:14:55.476 { 00:14:55.476 "name": "BaseBdev2", 00:14:55.476 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:55.476 "is_configured": true, 00:14:55.476 "data_offset": 2048, 00:14:55.476 "data_size": 63488 00:14:55.476 } 00:14:55.476 ] 00:14:55.476 }' 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.476 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:55.781 20:07:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.781 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.781 [2024-12-05 20:07:56.939541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.782 [2024-12-05 20:07:56.939723] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:55.782 [2024-12-05 20:07:56.939746] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:55.782 request: 00:14:55.782 { 00:14:55.782 "base_bdev": "BaseBdev1", 00:14:55.782 "raid_bdev": "raid_bdev1", 00:14:55.782 "method": "bdev_raid_add_base_bdev", 00:14:55.782 "req_id": 1 00:14:55.782 } 00:14:55.782 Got JSON-RPC error response 00:14:55.782 response: 00:14:55.782 { 00:14:55.782 "code": -22, 00:14:55.782 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:55.782 } 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.782 20:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.738 20:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.738 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.738 "name": "raid_bdev1", 00:14:56.738 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 
00:14:56.738 "strip_size_kb": 0, 00:14:56.738 "state": "online", 00:14:56.738 "raid_level": "raid1", 00:14:56.738 "superblock": true, 00:14:56.738 "num_base_bdevs": 2, 00:14:56.738 "num_base_bdevs_discovered": 1, 00:14:56.738 "num_base_bdevs_operational": 1, 00:14:56.738 "base_bdevs_list": [ 00:14:56.738 { 00:14:56.738 "name": null, 00:14:56.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.738 "is_configured": false, 00:14:56.738 "data_offset": 0, 00:14:56.738 "data_size": 63488 00:14:56.738 }, 00:14:56.738 { 00:14:56.738 "name": "BaseBdev2", 00:14:56.738 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:56.738 "is_configured": true, 00:14:56.738 "data_offset": 2048, 00:14:56.738 "data_size": 63488 00:14:56.738 } 00:14:56.738 ] 00:14:56.738 }' 00:14:56.738 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.738 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.997 20:07:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.997 "name": "raid_bdev1", 00:14:56.997 "uuid": "373e1846-5416-44fd-aee9-117358b4f058", 00:14:56.997 "strip_size_kb": 0, 00:14:56.997 "state": "online", 00:14:56.997 "raid_level": "raid1", 00:14:56.997 "superblock": true, 00:14:56.997 "num_base_bdevs": 2, 00:14:56.997 "num_base_bdevs_discovered": 1, 00:14:56.997 "num_base_bdevs_operational": 1, 00:14:56.997 "base_bdevs_list": [ 00:14:56.997 { 00:14:56.997 "name": null, 00:14:56.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.997 "is_configured": false, 00:14:56.997 "data_offset": 0, 00:14:56.997 "data_size": 63488 00:14:56.997 }, 00:14:56.997 { 00:14:56.997 "name": "BaseBdev2", 00:14:56.997 "uuid": "5f08f6c0-d426-52f6-b22b-26e49fa6e883", 00:14:56.997 "is_configured": true, 00:14:56.997 "data_offset": 2048, 00:14:56.997 "data_size": 63488 00:14:56.997 } 00:14:56.997 ] 00:14:56.997 }' 00:14:56.997 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75857 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75857 ']' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75857 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75857 00:14:57.256 killing process with pid 75857 00:14:57.256 Received shutdown signal, test time was about 60.000000 seconds 00:14:57.256 00:14:57.256 Latency(us) 00:14:57.256 [2024-12-05T20:07:58.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.256 [2024-12-05T20:07:58.693Z] =================================================================================================================== 00:14:57.256 [2024-12-05T20:07:58.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75857' 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75857 00:14:57.256 [2024-12-05 20:07:58.566960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.256 [2024-12-05 20:07:58.567085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.256 [2024-12-05 20:07:58.567135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.256 [2024-12-05 20:07:58.567152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:57.256 20:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75857 00:14:57.515 [2024-12-05 20:07:58.867922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.891 20:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.891 00:14:58.891 real 0m23.297s 
00:14:58.891 user 0m28.350s 00:14:58.891 sys 0m3.592s 00:14:58.891 20:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.891 20:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 ************************************ 00:14:58.891 END TEST raid_rebuild_test_sb 00:14:58.891 ************************************ 00:14:58.891 20:08:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:58.891 20:08:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:58.891 20:08:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.891 20:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 ************************************ 00:14:58.892 START TEST raid_rebuild_test_io 00:14:58.892 ************************************ 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.892 
20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76587 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76587 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76587 ']' 00:14:58.892 20:08:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.892 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.892 [2024-12-05 20:08:00.160688] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:14:58.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.892 Zero copy mechanism will not be used. 00:14:58.892 [2024-12-05 20:08:00.160903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76587 ] 00:14:59.150 [2024-12-05 20:08:00.337299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.150 [2024-12-05 20:08:00.453432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.408 [2024-12-05 20:08:00.651626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.408 [2024-12-05 20:08:00.651666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 BaseBdev1_malloc 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 [2024-12-05 20:08:01.044014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:59.667 [2024-12-05 20:08:01.044075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.667 [2024-12-05 20:08:01.044096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.667 [2024-12-05 20:08:01.044108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.667 [2024-12-05 20:08:01.046184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.667 [2024-12-05 20:08:01.046226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.667 BaseBdev1 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.667 20:08:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 BaseBdev2_malloc 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 [2024-12-05 20:08:01.100127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:59.667 [2024-12-05 20:08:01.100195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.667 [2024-12-05 20:08:01.100248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.667 [2024-12-05 20:08:01.100259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.667 [2024-12-05 20:08:01.102405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.926 [2024-12-05 20:08:01.102527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.926 BaseBdev2 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.926 spare_malloc 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.926 spare_delay 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.926 [2024-12-05 20:08:01.179591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.926 [2024-12-05 20:08:01.179699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.926 [2024-12-05 20:08:01.179724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:59.926 [2024-12-05 20:08:01.179735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.926 [2024-12-05 20:08:01.181984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.926 [2024-12-05 20:08:01.182057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.926 spare 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.926 20:08:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.926 [2024-12-05 20:08:01.191643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.926 [2024-12-05 20:08:01.193514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.926 [2024-12-05 20:08:01.193606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:59.926 [2024-12-05 20:08:01.193621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:59.926 [2024-12-05 20:08:01.193877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:59.926 [2024-12-05 20:08:01.194061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:59.926 [2024-12-05 20:08:01.194073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:59.926 [2024-12-05 20:08:01.194228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.926 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.926 "name": "raid_bdev1", 00:14:59.926 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:14:59.926 "strip_size_kb": 0, 00:14:59.926 "state": "online", 00:14:59.926 "raid_level": "raid1", 00:14:59.926 "superblock": false, 00:14:59.926 "num_base_bdevs": 2, 00:14:59.927 "num_base_bdevs_discovered": 2, 00:14:59.927 "num_base_bdevs_operational": 2, 00:14:59.927 "base_bdevs_list": [ 00:14:59.927 { 00:14:59.927 "name": "BaseBdev1", 00:14:59.927 "uuid": "85558b9f-b92f-5ab6-bf3a-9f064fe447eb", 00:14:59.927 "is_configured": true, 00:14:59.927 "data_offset": 0, 00:14:59.927 "data_size": 65536 00:14:59.927 }, 00:14:59.927 { 00:14:59.927 "name": "BaseBdev2", 00:14:59.927 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:14:59.927 "is_configured": true, 00:14:59.927 "data_offset": 0, 00:14:59.927 "data_size": 65536 00:14:59.927 } 00:14:59.927 ] 00:14:59.927 }' 00:14:59.927 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.927 20:08:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.185 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.185 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:00.185 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.185 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.185 [2024-12-05 20:08:01.619167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.443 [2024-12-05 20:08:01.718671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:00.443 "name": "raid_bdev1", 00:15:00.443 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:00.443 "strip_size_kb": 0, 00:15:00.443 "state": "online", 00:15:00.443 "raid_level": "raid1", 00:15:00.443 "superblock": false, 00:15:00.443 "num_base_bdevs": 2, 00:15:00.443 "num_base_bdevs_discovered": 1, 00:15:00.443 "num_base_bdevs_operational": 1, 00:15:00.443 "base_bdevs_list": [ 00:15:00.443 { 00:15:00.443 "name": null, 00:15:00.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.443 "is_configured": false, 00:15:00.443 "data_offset": 0, 00:15:00.443 "data_size": 65536 00:15:00.443 }, 00:15:00.443 { 00:15:00.443 "name": "BaseBdev2", 00:15:00.443 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:00.443 "is_configured": true, 00:15:00.443 "data_offset": 0, 00:15:00.443 "data_size": 65536 00:15:00.443 } 00:15:00.443 ] 00:15:00.443 }' 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.443 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.443 [2024-12-05 20:08:01.818350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:00.443 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.443 Zero copy mechanism will not be used. 00:15:00.443 Running I/O for 60 seconds... 
00:15:01.013 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.013 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.013 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.013 [2024-12-05 20:08:02.183031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.013 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.013 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.013 [2024-12-05 20:08:02.245041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:01.013 [2024-12-05 20:08:02.246827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.013 [2024-12-05 20:08:02.364382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.272 [2024-12-05 20:08:02.501508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.531 256.00 IOPS, 768.00 MiB/s [2024-12-05T20:08:02.968Z] [2024-12-05 20:08:02.964838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.531 [2024-12-05 20:08:02.965209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.101 20:08:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.101 "name": "raid_bdev1", 00:15:02.101 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:02.101 "strip_size_kb": 0, 00:15:02.101 "state": "online", 00:15:02.101 "raid_level": "raid1", 00:15:02.101 "superblock": false, 00:15:02.101 "num_base_bdevs": 2, 00:15:02.101 "num_base_bdevs_discovered": 2, 00:15:02.101 "num_base_bdevs_operational": 2, 00:15:02.101 "process": { 00:15:02.101 "type": "rebuild", 00:15:02.101 "target": "spare", 00:15:02.101 "progress": { 00:15:02.101 "blocks": 12288, 00:15:02.101 "percent": 18 00:15:02.101 } 00:15:02.101 }, 00:15:02.101 "base_bdevs_list": [ 00:15:02.101 { 00:15:02.101 "name": "spare", 00:15:02.101 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:02.101 "is_configured": true, 00:15:02.101 "data_offset": 0, 00:15:02.101 "data_size": 65536 00:15:02.101 }, 00:15:02.101 { 00:15:02.101 "name": "BaseBdev2", 00:15:02.101 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:02.101 "is_configured": true, 00:15:02.101 "data_offset": 0, 00:15:02.101 "data_size": 65536 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 }' 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:02.101 [2024-12-05 20:08:03.308048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.101 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.102 [2024-12-05 20:08:03.386764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.102 [2024-12-05 20:08:03.411210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.102 [2024-12-05 20:08:03.424089] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.102 [2024-12-05 20:08:03.426770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.102 [2024-12-05 20:08:03.426812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.102 [2024-12-05 20:08:03.426825] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.102 [2024-12-05 20:08:03.470990] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.102 "name": "raid_bdev1", 00:15:02.102 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:02.102 "strip_size_kb": 0, 00:15:02.102 "state": "online", 00:15:02.102 "raid_level": "raid1", 00:15:02.102 "superblock": false, 00:15:02.102 "num_base_bdevs": 2, 00:15:02.102 "num_base_bdevs_discovered": 1, 00:15:02.102 "num_base_bdevs_operational": 
1, 00:15:02.102 "base_bdevs_list": [ 00:15:02.102 { 00:15:02.102 "name": null, 00:15:02.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.102 "is_configured": false, 00:15:02.102 "data_offset": 0, 00:15:02.102 "data_size": 65536 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "name": "BaseBdev2", 00:15:02.102 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:02.102 "is_configured": true, 00:15:02.102 "data_offset": 0, 00:15:02.102 "data_size": 65536 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }' 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.102 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.670 199.50 IOPS, 598.50 MiB/s [2024-12-05T20:08:04.107Z] 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.670 "name": "raid_bdev1", 
00:15:02.670 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:02.670 "strip_size_kb": 0, 00:15:02.670 "state": "online", 00:15:02.670 "raid_level": "raid1", 00:15:02.670 "superblock": false, 00:15:02.670 "num_base_bdevs": 2, 00:15:02.670 "num_base_bdevs_discovered": 1, 00:15:02.670 "num_base_bdevs_operational": 1, 00:15:02.670 "base_bdevs_list": [ 00:15:02.670 { 00:15:02.670 "name": null, 00:15:02.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.670 "is_configured": false, 00:15:02.670 "data_offset": 0, 00:15:02.670 "data_size": 65536 00:15:02.670 }, 00:15:02.670 { 00:15:02.670 "name": "BaseBdev2", 00:15:02.670 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:02.670 "is_configured": true, 00:15:02.670 "data_offset": 0, 00:15:02.670 "data_size": 65536 00:15:02.670 } 00:15:02.670 ] 00:15:02.670 }' 00:15:02.670 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.670 [2024-12-05 20:08:04.050646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.670 20:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.671 [2024-12-05 20:08:04.089068] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:02.671 [2024-12-05 20:08:04.091060] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.930 [2024-12-05 20:08:04.211626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.930 [2024-12-05 20:08:04.212364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.930 [2024-12-05 20:08:04.333641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.930 [2024-12-05 20:08:04.334106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:03.497 [2024-12-05 20:08:04.694670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:03.757 182.67 IOPS, 548.00 MiB/s [2024-12-05T20:08:05.194Z] [2024-12-05 20:08:05.019279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.757 "name": "raid_bdev1", 00:15:03.757 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:03.757 "strip_size_kb": 0, 00:15:03.757 "state": "online", 00:15:03.757 "raid_level": "raid1", 00:15:03.757 "superblock": false, 00:15:03.757 "num_base_bdevs": 2, 00:15:03.757 "num_base_bdevs_discovered": 2, 00:15:03.757 "num_base_bdevs_operational": 2, 00:15:03.757 "process": { 00:15:03.757 "type": "rebuild", 00:15:03.757 "target": "spare", 00:15:03.757 "progress": { 00:15:03.757 "blocks": 14336, 00:15:03.757 "percent": 21 00:15:03.757 } 00:15:03.757 }, 00:15:03.757 "base_bdevs_list": [ 00:15:03.757 { 00:15:03.757 "name": "spare", 00:15:03.757 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:03.757 "is_configured": true, 00:15:03.757 "data_offset": 0, 00:15:03.757 "data_size": 65536 00:15:03.757 }, 00:15:03.757 { 00:15:03.757 "name": "BaseBdev2", 00:15:03.757 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:03.757 "is_configured": true, 00:15:03.757 "data_offset": 0, 00:15:03.757 "data_size": 65536 00:15:03.757 } 00:15:03.757 ] 00:15:03.757 }' 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.757 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.015 [2024-12-05 20:08:05.220935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:04.015 [2024-12-05 20:08:05.221349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.015 "name": "raid_bdev1", 00:15:04.015 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:04.015 "strip_size_kb": 0, 00:15:04.015 "state": "online", 00:15:04.015 "raid_level": "raid1", 00:15:04.015 "superblock": false, 00:15:04.015 "num_base_bdevs": 2, 00:15:04.015 "num_base_bdevs_discovered": 2, 00:15:04.015 "num_base_bdevs_operational": 2, 00:15:04.015 "process": { 00:15:04.015 "type": "rebuild", 00:15:04.015 "target": "spare", 00:15:04.015 "progress": { 00:15:04.015 "blocks": 16384, 00:15:04.015 "percent": 25 00:15:04.015 } 00:15:04.015 }, 00:15:04.015 "base_bdevs_list": [ 00:15:04.015 { 00:15:04.015 "name": "spare", 00:15:04.015 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:04.015 "is_configured": true, 00:15:04.015 "data_offset": 0, 00:15:04.015 "data_size": 65536 00:15:04.015 }, 00:15:04.015 { 00:15:04.015 "name": "BaseBdev2", 00:15:04.015 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:04.015 "is_configured": true, 00:15:04.015 "data_offset": 0, 00:15:04.015 "data_size": 65536 00:15:04.015 } 00:15:04.015 ] 00:15:04.015 }' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.015 20:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.275 [2024-12-05 20:08:05.465137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:04.275 [2024-12-05 20:08:05.667067] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:04.275 [2024-12-05 20:08:05.667501] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:04.536 151.00 IOPS, 453.00 MiB/s [2024-12-05T20:08:05.973Z] [2024-12-05 20:08:05.905453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:05.102 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.102 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.102 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.102 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.102 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.103 "name": "raid_bdev1", 00:15:05.103 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:05.103 "strip_size_kb": 0, 00:15:05.103 "state": "online", 00:15:05.103 "raid_level": 
"raid1", 00:15:05.103 "superblock": false, 00:15:05.103 "num_base_bdevs": 2, 00:15:05.103 "num_base_bdevs_discovered": 2, 00:15:05.103 "num_base_bdevs_operational": 2, 00:15:05.103 "process": { 00:15:05.103 "type": "rebuild", 00:15:05.103 "target": "spare", 00:15:05.103 "progress": { 00:15:05.103 "blocks": 32768, 00:15:05.103 "percent": 50 00:15:05.103 } 00:15:05.103 }, 00:15:05.103 "base_bdevs_list": [ 00:15:05.103 { 00:15:05.103 "name": "spare", 00:15:05.103 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:05.103 "is_configured": true, 00:15:05.103 "data_offset": 0, 00:15:05.103 "data_size": 65536 00:15:05.103 }, 00:15:05.103 { 00:15:05.103 "name": "BaseBdev2", 00:15:05.103 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:05.103 "is_configured": true, 00:15:05.103 "data_offset": 0, 00:15:05.103 "data_size": 65536 00:15:05.103 } 00:15:05.103 ] 00:15:05.103 }' 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.103 20:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.930 128.60 IOPS, 385.80 MiB/s [2024-12-05T20:08:07.367Z] [2024-12-05 20:08:07.127660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.189 "name": "raid_bdev1", 00:15:06.189 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:06.189 "strip_size_kb": 0, 00:15:06.189 "state": "online", 00:15:06.189 "raid_level": "raid1", 00:15:06.189 "superblock": false, 00:15:06.189 "num_base_bdevs": 2, 00:15:06.189 "num_base_bdevs_discovered": 2, 00:15:06.189 "num_base_bdevs_operational": 2, 00:15:06.189 "process": { 00:15:06.189 "type": "rebuild", 00:15:06.189 "target": "spare", 00:15:06.189 "progress": { 00:15:06.189 "blocks": 51200, 00:15:06.189 "percent": 78 00:15:06.189 } 00:15:06.189 }, 00:15:06.189 "base_bdevs_list": [ 00:15:06.189 { 00:15:06.189 "name": "spare", 00:15:06.189 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:06.189 "is_configured": true, 00:15:06.189 "data_offset": 0, 00:15:06.189 "data_size": 65536 00:15:06.189 }, 00:15:06.189 { 00:15:06.189 "name": "BaseBdev2", 00:15:06.189 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:06.189 "is_configured": true, 00:15:06.189 "data_offset": 0, 00:15:06.189 
"data_size": 65536 00:15:06.189 } 00:15:06.189 ] 00:15:06.189 }' 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.189 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.449 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.449 20:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.017 113.67 IOPS, 341.00 MiB/s [2024-12-05T20:08:08.454Z] [2024-12-05 20:08:08.219957] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.017 [2024-12-05 20:08:08.325170] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.017 [2024-12-05 20:08:08.328044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.276 20:08:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.276 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.537 "name": "raid_bdev1", 00:15:07.537 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:07.537 "strip_size_kb": 0, 00:15:07.537 "state": "online", 00:15:07.537 "raid_level": "raid1", 00:15:07.537 "superblock": false, 00:15:07.537 "num_base_bdevs": 2, 00:15:07.537 "num_base_bdevs_discovered": 2, 00:15:07.537 "num_base_bdevs_operational": 2, 00:15:07.537 "base_bdevs_list": [ 00:15:07.537 { 00:15:07.537 "name": "spare", 00:15:07.537 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:07.537 "is_configured": true, 00:15:07.537 "data_offset": 0, 00:15:07.537 "data_size": 65536 00:15:07.537 }, 00:15:07.537 { 00:15:07.537 "name": "BaseBdev2", 00:15:07.537 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:07.537 "is_configured": true, 00:15:07.537 "data_offset": 0, 00:15:07.537 "data_size": 65536 00:15:07.537 } 00:15:07.537 ] 00:15:07.537 }' 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.537 102.00 IOPS, 306.00 MiB/s [2024-12-05T20:08:08.974Z] 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.537 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.538 "name": "raid_bdev1", 00:15:07.538 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:07.538 "strip_size_kb": 0, 00:15:07.538 "state": "online", 00:15:07.538 "raid_level": "raid1", 00:15:07.538 "superblock": false, 00:15:07.538 "num_base_bdevs": 2, 00:15:07.538 "num_base_bdevs_discovered": 2, 00:15:07.538 "num_base_bdevs_operational": 2, 00:15:07.538 "base_bdevs_list": [ 00:15:07.538 { 00:15:07.538 "name": "spare", 00:15:07.538 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:07.538 "is_configured": true, 00:15:07.538 "data_offset": 0, 00:15:07.538 "data_size": 65536 00:15:07.538 }, 00:15:07.538 { 00:15:07.538 "name": "BaseBdev2", 00:15:07.538 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:07.538 "is_configured": true, 00:15:07.538 "data_offset": 0, 00:15:07.538 "data_size": 65536 00:15:07.538 } 00:15:07.538 ] 00:15:07.538 }' 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.538 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:15:07.796 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.796 "name": "raid_bdev1", 00:15:07.796 "uuid": "a7051a89-ba06-4970-900f-3ddefb494262", 00:15:07.796 "strip_size_kb": 0, 00:15:07.796 "state": "online", 00:15:07.796 "raid_level": "raid1", 00:15:07.796 "superblock": false, 00:15:07.796 "num_base_bdevs": 2, 00:15:07.796 "num_base_bdevs_discovered": 2, 00:15:07.796 "num_base_bdevs_operational": 2, 00:15:07.796 "base_bdevs_list": [ 00:15:07.796 { 00:15:07.796 "name": "spare", 00:15:07.796 "uuid": "6653a93a-c785-51f2-9786-d3d6d4739be6", 00:15:07.796 "is_configured": true, 00:15:07.796 "data_offset": 0, 00:15:07.796 "data_size": 65536 00:15:07.796 }, 00:15:07.796 { 00:15:07.796 "name": "BaseBdev2", 00:15:07.796 "uuid": "dd02bfb0-a997-5d4e-818d-ce28c7f07071", 00:15:07.796 "is_configured": true, 00:15:07.796 "data_offset": 0, 00:15:07.796 "data_size": 65536 00:15:07.796 } 00:15:07.796 ] 00:15:07.796 }' 00:15:07.796 20:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.796 20:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.055 [2024-12-05 20:08:09.355169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.055 [2024-12-05 20:08:09.355251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.055 00:15:08.055 Latency(us) 00:15:08.055 [2024-12-05T20:08:09.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.055 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 
3145728) 00:15:08.055 raid_bdev1 : 7.60 96.30 288.90 0.00 0.00 14235.98 309.44 114015.47 00:15:08.055 [2024-12-05T20:08:09.492Z] =================================================================================================================== 00:15:08.055 [2024-12-05T20:08:09.492Z] Total : 96.30 288.90 0.00 0.00 14235.98 309.44 114015.47 00:15:08.055 [2024-12-05 20:08:09.429266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.055 [2024-12-05 20:08:09.429386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.055 [2024-12-05 20:08:09.429468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.055 [2024-12-05 20:08:09.429482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:08.055 { 00:15:08.055 "results": [ 00:15:08.055 { 00:15:08.055 "job": "raid_bdev1", 00:15:08.055 "core_mask": "0x1", 00:15:08.055 "workload": "randrw", 00:15:08.055 "percentage": 50, 00:15:08.055 "status": "finished", 00:15:08.055 "queue_depth": 2, 00:15:08.055 "io_size": 3145728, 00:15:08.055 "runtime": 7.601206, 00:15:08.055 "iops": 96.30050810358252, 00:15:08.055 "mibps": 288.90152431074756, 00:15:08.055 "io_failed": 0, 00:15:08.055 "io_timeout": 0, 00:15:08.055 "avg_latency_us": 14235.979974705897, 00:15:08.055 "min_latency_us": 309.435807860262, 00:15:08.055 "max_latency_us": 114015.46899563319 00:15:08.055 } 00:15:08.055 ], 00:15:08.055 "core_count": 1 00:15:08.055 } 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.055 
20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.055 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:08.315 /dev/nbd0 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.315 1+0 records in 00:15:08.315 1+0 records out 00:15:08.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322301 s, 12.7 MB/s 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev2 ']' 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.315 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:08.574 /dev/nbd1 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- 
# break 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.574 1+0 records in 00:15:08.574 1+0 records out 00:15:08.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270542 s, 15.1 MB/s 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.574 20:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.833 
20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.833 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.093 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76587 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76587 ']' 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76587 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76587 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.353 killing process with pid 76587 00:15:09.353 Received shutdown signal, test time was about 8.788915 seconds 00:15:09.353 00:15:09.353 Latency(us) 00:15:09.353 [2024-12-05T20:08:10.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.353 [2024-12-05T20:08:10.790Z] =================================================================================================================== 00:15:09.353 
[2024-12-05T20:08:10.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76587' 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76587 00:15:09.353 20:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76587 00:15:09.353 [2024-12-05 20:08:10.592425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.612 [2024-12-05 20:08:10.830007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.992 ************************************ 00:15:10.992 END TEST raid_rebuild_test_io 00:15:10.992 ************************************ 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:10.992 00:15:10.992 real 0m11.989s 00:15:10.992 user 0m15.132s 00:15:10.992 sys 0m1.360s 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.992 20:08:12 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:10.992 20:08:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:10.992 20:08:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.992 20:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.992 ************************************ 00:15:10.992 START TEST raid_rebuild_test_sb_io 00:15:10.992 ************************************ 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:10.992 20:08:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76963 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76963 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76963 ']' 00:15:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.992 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:10.992 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:10.992 Zero copy mechanism will not be used. 
00:15:10.992 [2024-12-05 20:08:12.203100] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:15:10.992 [2024-12-05 20:08:12.203227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ] 00:15:10.992 [2024-12-05 20:08:12.376943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.251 [2024-12-05 20:08:12.492301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.510 [2024-12-05 20:08:12.687731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.511 [2024-12-05 20:08:12.687767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.769 BaseBdev1_malloc 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.769 [2024-12-05 20:08:13.094787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:11.769 [2024-12-05 20:08:13.094855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.769 [2024-12-05 20:08:13.094880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:11.769 [2024-12-05 20:08:13.094901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.769 [2024-12-05 20:08:13.097034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.769 BaseBdev1 00:15:11.769 [2024-12-05 20:08:13.097146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.770 BaseBdev2_malloc 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.770 [2024-12-05 20:08:13.142468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:11.770 [2024-12-05 20:08:13.142530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.770 [2024-12-05 20:08:13.142553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:11.770 [2024-12-05 20:08:13.142565] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.770 [2024-12-05 20:08:13.144876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.770 [2024-12-05 20:08:13.144920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:11.770 BaseBdev2 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.770 spare_malloc 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.770 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.030 spare_delay 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.030 
20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.030 [2024-12-05 20:08:13.211686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.030 [2024-12-05 20:08:13.211747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.030 [2024-12-05 20:08:13.211768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:12.030 [2024-12-05 20:08:13.211778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.030 [2024-12-05 20:08:13.214103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.030 [2024-12-05 20:08:13.214144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.030 spare 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.030 [2024-12-05 20:08:13.219731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.030 [2024-12-05 20:08:13.221673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.030 [2024-12-05 20:08:13.221866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:12.030 [2024-12-05 20:08:13.221902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:12.030 [2024-12-05 20:08:13.222188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.030 [2024-12-05 20:08:13.222399] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:12.030 [2024-12-05 20:08:13.222409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:12.030 [2024-12-05 20:08:13.222575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.030 20:08:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.030 "name": "raid_bdev1", 00:15:12.030 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:12.030 "strip_size_kb": 0, 00:15:12.030 "state": "online", 00:15:12.030 "raid_level": "raid1", 00:15:12.030 "superblock": true, 00:15:12.030 "num_base_bdevs": 2, 00:15:12.030 "num_base_bdevs_discovered": 2, 00:15:12.030 "num_base_bdevs_operational": 2, 00:15:12.030 "base_bdevs_list": [ 00:15:12.030 { 00:15:12.030 "name": "BaseBdev1", 00:15:12.030 "uuid": "eabe943a-070a-50bd-9e97-467460c695a2", 00:15:12.030 "is_configured": true, 00:15:12.030 "data_offset": 2048, 00:15:12.030 "data_size": 63488 00:15:12.030 }, 00:15:12.030 { 00:15:12.030 "name": "BaseBdev2", 00:15:12.030 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:12.030 "is_configured": true, 00:15:12.030 "data_offset": 2048, 00:15:12.030 "data_size": 63488 00:15:12.030 } 00:15:12.030 ] 00:15:12.030 }' 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.289 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.289 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:12.289 [2024-12-05 20:08:13.675282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.289 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.549 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:12.549 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.549 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:12.550 [2024-12-05 20:08:13.782774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.550 "name": "raid_bdev1", 00:15:12.550 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:12.550 "strip_size_kb": 0, 00:15:12.550 "state": "online", 00:15:12.550 "raid_level": "raid1", 00:15:12.550 "superblock": true, 00:15:12.550 "num_base_bdevs": 2, 00:15:12.550 "num_base_bdevs_discovered": 1, 00:15:12.550 "num_base_bdevs_operational": 1, 00:15:12.550 "base_bdevs_list": [ 00:15:12.550 { 00:15:12.550 "name": null, 00:15:12.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.550 "is_configured": false, 00:15:12.550 
"data_offset": 0, 00:15:12.550 "data_size": 63488 00:15:12.550 }, 00:15:12.550 { 00:15:12.550 "name": "BaseBdev2", 00:15:12.550 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:12.550 "is_configured": true, 00:15:12.550 "data_offset": 2048, 00:15:12.550 "data_size": 63488 00:15:12.550 } 00:15:12.550 ] 00:15:12.550 }' 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.550 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.550 [2024-12-05 20:08:13.865966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:12.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:12.550 Zero copy mechanism will not be used. 00:15:12.550 Running I/O for 60 seconds... 00:15:12.809 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.809 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.809 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.809 [2024-12-05 20:08:14.212135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.068 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.068 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:13.068 [2024-12-05 20:08:14.279746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:13.068 [2024-12-05 20:08:14.281787] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.068 [2024-12-05 20:08:14.396293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.068 [2024-12-05 20:08:14.396910] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.328 [2024-12-05 20:08:14.606541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.328 [2024-12-05 20:08:14.606981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.587 [2024-12-05 20:08:14.832067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.587 [2024-12-05 20:08:14.832713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.869 194.00 IOPS, 582.00 MiB/s [2024-12-05T20:08:15.306Z] [2024-12-05 20:08:15.046965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.869 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.869 20:08:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.135 "name": "raid_bdev1", 00:15:14.135 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:14.135 "strip_size_kb": 0, 00:15:14.135 "state": "online", 00:15:14.135 "raid_level": "raid1", 00:15:14.135 "superblock": true, 00:15:14.135 "num_base_bdevs": 2, 00:15:14.135 "num_base_bdevs_discovered": 2, 00:15:14.135 "num_base_bdevs_operational": 2, 00:15:14.135 "process": { 00:15:14.135 "type": "rebuild", 00:15:14.135 "target": "spare", 00:15:14.135 "progress": { 00:15:14.135 "blocks": 12288, 00:15:14.135 "percent": 19 00:15:14.135 } 00:15:14.135 }, 00:15:14.135 "base_bdevs_list": [ 00:15:14.135 { 00:15:14.135 "name": "spare", 00:15:14.135 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:14.135 "is_configured": true, 00:15:14.135 "data_offset": 2048, 00:15:14.135 "data_size": 63488 00:15:14.135 }, 00:15:14.135 { 00:15:14.135 "name": "BaseBdev2", 00:15:14.135 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:14.135 "is_configured": true, 00:15:14.135 "data_offset": 2048, 00:15:14.135 "data_size": 63488 00:15:14.135 } 00:15:14.135 ] 00:15:14.135 }' 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.135 [2024-12-05 20:08:15.379536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.135 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.135 [2024-12-05 20:08:15.424485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.394 [2024-12-05 20:08:15.615630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.394 [2024-12-05 20:08:15.624638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.394 [2024-12-05 20:08:15.624769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.394 [2024-12-05 20:08:15.624803] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.394 [2024-12-05 20:08:15.672935] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.394 "name": "raid_bdev1", 00:15:14.394 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:14.394 "strip_size_kb": 0, 00:15:14.394 "state": "online", 00:15:14.394 "raid_level": "raid1", 00:15:14.394 "superblock": true, 00:15:14.394 "num_base_bdevs": 2, 00:15:14.394 "num_base_bdevs_discovered": 1, 00:15:14.394 "num_base_bdevs_operational": 1, 00:15:14.394 "base_bdevs_list": [ 00:15:14.394 { 00:15:14.394 "name": null, 00:15:14.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.394 "is_configured": false, 00:15:14.394 "data_offset": 0, 00:15:14.394 "data_size": 63488 00:15:14.394 }, 00:15:14.394 { 00:15:14.394 "name": "BaseBdev2", 00:15:14.394 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:14.394 "is_configured": true, 00:15:14.394 "data_offset": 2048, 00:15:14.394 "data_size": 63488 00:15:14.394 } 00:15:14.394 ] 00:15:14.394 }' 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.394 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:14.913 167.00 IOPS, 501.00 MiB/s [2024-12-05T20:08:16.350Z] 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.913 "name": "raid_bdev1", 00:15:14.913 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:14.913 "strip_size_kb": 0, 00:15:14.913 "state": "online", 00:15:14.913 "raid_level": "raid1", 00:15:14.913 "superblock": true, 00:15:14.913 "num_base_bdevs": 2, 00:15:14.913 "num_base_bdevs_discovered": 1, 00:15:14.913 "num_base_bdevs_operational": 1, 00:15:14.913 "base_bdevs_list": [ 00:15:14.913 { 00:15:14.913 "name": null, 00:15:14.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.913 "is_configured": false, 00:15:14.913 "data_offset": 0, 00:15:14.913 "data_size": 63488 00:15:14.913 }, 00:15:14.913 { 00:15:14.913 "name": "BaseBdev2", 00:15:14.913 "uuid": 
"16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:14.913 "is_configured": true, 00:15:14.913 "data_offset": 2048, 00:15:14.913 "data_size": 63488 00:15:14.913 } 00:15:14.913 ] 00:15:14.913 }' 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.913 [2024-12-05 20:08:16.242631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.913 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.913 [2024-12-05 20:08:16.304701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:14.913 [2024-12-05 20:08:16.306599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.173 [2024-12-05 20:08:16.407175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:15.173 [2024-12-05 20:08:16.407788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:15.173 [2024-12-05 20:08:16.530589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:15.173 [2024-12-05 20:08:16.531072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:15.741 164.33 IOPS, 493.00 MiB/s [2024-12-05T20:08:17.178Z] [2024-12-05 20:08:16.901108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:15.741 [2024-12-05 20:08:17.130653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.000 "name": "raid_bdev1", 00:15:16.000 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:16.000 "strip_size_kb": 0, 00:15:16.000 "state": "online", 
00:15:16.000 "raid_level": "raid1", 00:15:16.000 "superblock": true, 00:15:16.000 "num_base_bdevs": 2, 00:15:16.000 "num_base_bdevs_discovered": 2, 00:15:16.000 "num_base_bdevs_operational": 2, 00:15:16.000 "process": { 00:15:16.000 "type": "rebuild", 00:15:16.000 "target": "spare", 00:15:16.000 "progress": { 00:15:16.000 "blocks": 10240, 00:15:16.000 "percent": 16 00:15:16.000 } 00:15:16.000 }, 00:15:16.000 "base_bdevs_list": [ 00:15:16.000 { 00:15:16.000 "name": "spare", 00:15:16.000 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:16.000 "is_configured": true, 00:15:16.000 "data_offset": 2048, 00:15:16.000 "data_size": 63488 00:15:16.000 }, 00:15:16.000 { 00:15:16.000 "name": "BaseBdev2", 00:15:16.000 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:16.000 "is_configured": true, 00:15:16.000 "data_offset": 2048, 00:15:16.000 "data_size": 63488 00:15:16.000 } 00:15:16.000 ] 00:15:16.000 }' 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.000 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:16.259 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:16.259 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:16.259 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.260 "name": "raid_bdev1", 00:15:16.260 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:16.260 "strip_size_kb": 0, 00:15:16.260 "state": "online", 00:15:16.260 "raid_level": "raid1", 00:15:16.260 "superblock": true, 00:15:16.260 "num_base_bdevs": 2, 00:15:16.260 "num_base_bdevs_discovered": 2, 00:15:16.260 "num_base_bdevs_operational": 2, 00:15:16.260 "process": { 00:15:16.260 "type": "rebuild", 00:15:16.260 "target": "spare", 00:15:16.260 "progress": { 00:15:16.260 "blocks": 
12288, 00:15:16.260 "percent": 19 00:15:16.260 } 00:15:16.260 }, 00:15:16.260 "base_bdevs_list": [ 00:15:16.260 { 00:15:16.260 "name": "spare", 00:15:16.260 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev2", 00:15:16.260 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 } 00:15:16.260 ] 00:15:16.260 }' 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.260 [2024-12-05 20:08:17.573415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.260 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.519 140.00 IOPS, 420.00 MiB/s [2024-12-05T20:08:17.956Z] [2024-12-05 20:08:17.916326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:16.519 [2024-12-05 20:08:17.916998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:17.087 [2024-12-05 20:08:18.466563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:17.087 [2024-12-05 20:08:18.472755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 
offset_end: 30720 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.347 "name": "raid_bdev1", 00:15:17.347 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:17.347 "strip_size_kb": 0, 00:15:17.347 "state": "online", 00:15:17.347 "raid_level": "raid1", 00:15:17.347 "superblock": true, 00:15:17.347 "num_base_bdevs": 2, 00:15:17.347 "num_base_bdevs_discovered": 2, 00:15:17.347 "num_base_bdevs_operational": 2, 00:15:17.347 "process": { 00:15:17.347 "type": "rebuild", 00:15:17.347 "target": "spare", 00:15:17.347 "progress": { 00:15:17.347 "blocks": 28672, 00:15:17.347 "percent": 45 00:15:17.347 } 00:15:17.347 }, 00:15:17.347 "base_bdevs_list": [ 00:15:17.347 { 00:15:17.347 
"name": "spare", 00:15:17.347 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:17.347 "is_configured": true, 00:15:17.347 "data_offset": 2048, 00:15:17.347 "data_size": 63488 00:15:17.347 }, 00:15:17.347 { 00:15:17.347 "name": "BaseBdev2", 00:15:17.347 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:17.347 "is_configured": true, 00:15:17.347 "data_offset": 2048, 00:15:17.347 "data_size": 63488 00:15:17.347 } 00:15:17.347 ] 00:15:17.347 }' 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.865 121.00 IOPS, 363.00 MiB/s [2024-12-05T20:08:19.302Z] [2024-12-05 20:08:19.158404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:18.124 [2024-12-05 20:08:19.366370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.384 
20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.384 "name": "raid_bdev1", 00:15:18.384 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:18.384 "strip_size_kb": 0, 00:15:18.384 "state": "online", 00:15:18.384 "raid_level": "raid1", 00:15:18.384 "superblock": true, 00:15:18.384 "num_base_bdevs": 2, 00:15:18.384 "num_base_bdevs_discovered": 2, 00:15:18.384 "num_base_bdevs_operational": 2, 00:15:18.384 "process": { 00:15:18.384 "type": "rebuild", 00:15:18.384 "target": "spare", 00:15:18.384 "progress": { 00:15:18.384 "blocks": 45056, 00:15:18.384 "percent": 70 00:15:18.384 } 00:15:18.384 }, 00:15:18.384 "base_bdevs_list": [ 00:15:18.384 { 00:15:18.384 "name": "spare", 00:15:18.384 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:18.384 "is_configured": true, 00:15:18.384 "data_offset": 2048, 00:15:18.384 "data_size": 63488 00:15:18.384 }, 00:15:18.384 { 00:15:18.384 "name": "BaseBdev2", 00:15:18.384 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:18.384 "is_configured": true, 00:15:18.384 "data_offset": 2048, 00:15:18.384 "data_size": 63488 00:15:18.384 } 00:15:18.384 ] 00:15:18.384 }' 00:15:18.384 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.644 20:08:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.644 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.645 107.83 IOPS, 323.50 MiB/s [2024-12-05T20:08:20.082Z] 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.645 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.212 [2024-12-05 20:08:20.380938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:19.471 [2024-12-05 20:08:20.711349] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:19.471 [2024-12-05 20:08:20.811275] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:19.471 [2024-12-05 20:08:20.813419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.471 98.71 IOPS, 296.14 MiB/s [2024-12-05T20:08:20.908Z] 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.471 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.731 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.731 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.731 "name": "raid_bdev1", 00:15:19.731 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:19.731 "strip_size_kb": 0, 00:15:19.731 "state": "online", 00:15:19.731 "raid_level": "raid1", 00:15:19.731 "superblock": true, 00:15:19.731 "num_base_bdevs": 2, 00:15:19.731 "num_base_bdevs_discovered": 2, 00:15:19.731 "num_base_bdevs_operational": 2, 00:15:19.731 "base_bdevs_list": [ 00:15:19.731 { 00:15:19.731 "name": "spare", 00:15:19.731 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:19.731 "is_configured": true, 00:15:19.731 "data_offset": 2048, 00:15:19.731 "data_size": 63488 00:15:19.731 }, 00:15:19.731 { 00:15:19.731 "name": "BaseBdev2", 00:15:19.731 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:19.731 "is_configured": true, 00:15:19.731 "data_offset": 2048, 00:15:19.731 "data_size": 63488 00:15:19.731 } 00:15:19.731 ] 00:15:19.731 }' 00:15:19.731 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.731 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:19.731 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.731 "name": "raid_bdev1", 00:15:19.731 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:19.731 "strip_size_kb": 0, 00:15:19.731 "state": "online", 00:15:19.731 "raid_level": "raid1", 00:15:19.731 "superblock": true, 00:15:19.731 "num_base_bdevs": 2, 00:15:19.731 "num_base_bdevs_discovered": 2, 00:15:19.731 "num_base_bdevs_operational": 2, 00:15:19.731 "base_bdevs_list": [ 00:15:19.731 { 00:15:19.731 "name": "spare", 00:15:19.731 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:19.731 "is_configured": true, 00:15:19.731 "data_offset": 2048, 00:15:19.731 "data_size": 63488 00:15:19.731 }, 00:15:19.731 { 00:15:19.731 "name": "BaseBdev2", 00:15:19.731 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:19.731 "is_configured": true, 00:15:19.731 "data_offset": 2048, 00:15:19.731 "data_size": 63488 00:15:19.731 } 00:15:19.731 ] 00:15:19.731 }' 00:15:19.731 20:08:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.731 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.991 "name": "raid_bdev1", 00:15:19.991 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:19.991 "strip_size_kb": 0, 00:15:19.991 "state": "online", 00:15:19.991 "raid_level": "raid1", 00:15:19.991 "superblock": true, 00:15:19.991 "num_base_bdevs": 2, 00:15:19.991 "num_base_bdevs_discovered": 2, 00:15:19.991 "num_base_bdevs_operational": 2, 00:15:19.991 "base_bdevs_list": [ 00:15:19.991 { 00:15:19.991 "name": "spare", 00:15:19.991 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:19.991 "is_configured": true, 00:15:19.991 "data_offset": 2048, 00:15:19.991 "data_size": 63488 00:15:19.991 }, 00:15:19.991 { 00:15:19.991 "name": "BaseBdev2", 00:15:19.991 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:19.991 "is_configured": true, 00:15:19.991 "data_offset": 2048, 00:15:19.991 "data_size": 63488 00:15:19.991 } 00:15:19.991 ] 00:15:19.991 }' 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.991 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.251 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.251 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.251 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.251 [2024-12-05 20:08:21.667078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.251 [2024-12-05 20:08:21.667175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.510 00:15:20.510 Latency(us) 00:15:20.510 [2024-12-05T20:08:21.947Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.510 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:20.510 raid_bdev1 : 7.85 90.66 271.97 0.00 0.00 16382.75 307.65 130957.53 00:15:20.510 [2024-12-05T20:08:21.947Z] =================================================================================================================== 00:15:20.510 [2024-12-05T20:08:21.947Z] Total : 90.66 271.97 0.00 0.00 16382.75 307.65 130957.53 00:15:20.510 { 00:15:20.510 "results": [ 00:15:20.510 { 00:15:20.510 "job": "raid_bdev1", 00:15:20.510 "core_mask": "0x1", 00:15:20.510 "workload": "randrw", 00:15:20.510 "percentage": 50, 00:15:20.510 "status": "finished", 00:15:20.510 "queue_depth": 2, 00:15:20.510 "io_size": 3145728, 00:15:20.510 "runtime": 7.853916, 00:15:20.510 "iops": 90.65541317223153, 00:15:20.510 "mibps": 271.9662395166946, 00:15:20.510 "io_failed": 0, 00:15:20.510 "io_timeout": 0, 00:15:20.510 "avg_latency_us": 16382.750208527548, 00:15:20.510 "min_latency_us": 307.6471615720524, 00:15:20.510 "max_latency_us": 130957.52663755459 00:15:20.510 } 00:15:20.510 ], 00:15:20.510 "core_count": 1 00:15:20.510 } 00:15:20.510 [2024-12-05 20:08:21.729141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.510 [2024-12-05 20:08:21.729212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.510 [2024-12-05 20:08:21.729297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.510 [2024-12-05 20:08:21.729308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.510 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:20.769 /dev/nbd0 00:15:20.769 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.769 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.769 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:20.769 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:20.769 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.770 1+0 records in 00:15:20.770 1+0 records out 00:15:20.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440625 s, 9.3 MB/s 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.770 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.770 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:21.029 /dev/nbd1 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.029 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.030 1+0 records in 00:15:21.030 1+0 records out 00:15:21.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052693 s, 7.8 MB/s 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:21.030 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.030 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.289 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.289 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.548 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.548 [2024-12-05 20:08:22.934993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:21.548 [2024-12-05 20:08:22.935060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.548 [2024-12-05 20:08:22.935104] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:21.548 [2024-12-05 20:08:22.935113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.548 [2024-12-05 20:08:22.937343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.548 [2024-12-05 20:08:22.937431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:21.548 [2024-12-05 20:08:22.937536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:21.548 [2024-12-05 20:08:22.937593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.548 [2024-12-05 20:08:22.937745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.548 spare 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.548 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 [2024-12-05 20:08:23.037646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:21.854 [2024-12-05 20:08:23.037684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.854 [2024-12-05 20:08:23.038013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00002b0d0 00:15:21.854 [2024-12-05 20:08:23.038208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:21.854 [2024-12-05 20:08:23.038220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:21.854 [2024-12-05 20:08:23.038419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 
20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.854 "name": "raid_bdev1", 00:15:21.854 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:21.854 "strip_size_kb": 0, 00:15:21.854 "state": "online", 00:15:21.854 "raid_level": "raid1", 00:15:21.854 "superblock": true, 00:15:21.854 "num_base_bdevs": 2, 00:15:21.854 "num_base_bdevs_discovered": 2, 00:15:21.854 "num_base_bdevs_operational": 2, 00:15:21.854 "base_bdevs_list": [ 00:15:21.854 { 00:15:21.854 "name": "spare", 00:15:21.854 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:21.854 "is_configured": true, 00:15:21.854 "data_offset": 2048, 00:15:21.854 "data_size": 63488 00:15:21.854 }, 00:15:21.854 { 00:15:21.854 "name": "BaseBdev2", 00:15:21.854 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:21.854 "is_configured": true, 00:15:21.854 "data_offset": 2048, 00:15:21.854 "data_size": 63488 00:15:21.854 } 00:15:21.854 ] 00:15:21.854 }' 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.854 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.133 20:08:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.133 "name": "raid_bdev1", 00:15:22.133 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:22.133 "strip_size_kb": 0, 00:15:22.133 "state": "online", 00:15:22.133 "raid_level": "raid1", 00:15:22.133 "superblock": true, 00:15:22.133 "num_base_bdevs": 2, 00:15:22.133 "num_base_bdevs_discovered": 2, 00:15:22.133 "num_base_bdevs_operational": 2, 00:15:22.133 "base_bdevs_list": [ 00:15:22.133 { 00:15:22.133 "name": "spare", 00:15:22.133 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:22.133 "is_configured": true, 00:15:22.133 "data_offset": 2048, 00:15:22.133 "data_size": 63488 00:15:22.133 }, 00:15:22.133 { 00:15:22.133 "name": "BaseBdev2", 00:15:22.133 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:22.133 "is_configured": true, 00:15:22.133 "data_offset": 2048, 00:15:22.133 "data_size": 63488 00:15:22.133 } 00:15:22.133 ] 00:15:22.133 }' 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.133 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.392 20:08:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.392 [2024-12-05 20:08:23.665961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.392 "name": "raid_bdev1", 00:15:22.392 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:22.392 "strip_size_kb": 0, 00:15:22.392 "state": "online", 00:15:22.392 "raid_level": "raid1", 00:15:22.392 "superblock": true, 00:15:22.392 "num_base_bdevs": 2, 00:15:22.392 "num_base_bdevs_discovered": 1, 00:15:22.392 "num_base_bdevs_operational": 1, 00:15:22.392 "base_bdevs_list": [ 00:15:22.392 { 00:15:22.392 "name": null, 00:15:22.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.392 "is_configured": false, 00:15:22.392 "data_offset": 0, 00:15:22.392 "data_size": 63488 00:15:22.392 }, 00:15:22.392 { 00:15:22.392 "name": "BaseBdev2", 00:15:22.392 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:22.392 "is_configured": true, 00:15:22.392 "data_offset": 2048, 00:15:22.392 "data_size": 63488 00:15:22.392 } 00:15:22.392 ] 00:15:22.392 }' 00:15:22.392 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.392 20:08:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.959 20:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.959 20:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.959 20:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.959 [2024-12-05 20:08:24.105260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.959 [2024-12-05 20:08:24.105467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.959 [2024-12-05 20:08:24.105485] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:22.959 [2024-12-05 20:08:24.105540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.959 [2024-12-05 20:08:24.122603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:22.960 20:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.960 20:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:22.960 [2024-12-05 20:08:24.124503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.897 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.897 "name": "raid_bdev1", 00:15:23.897 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:23.897 "strip_size_kb": 0, 00:15:23.897 "state": "online", 00:15:23.897 "raid_level": "raid1", 00:15:23.897 "superblock": true, 00:15:23.897 "num_base_bdevs": 2, 00:15:23.897 "num_base_bdevs_discovered": 2, 00:15:23.897 "num_base_bdevs_operational": 2, 00:15:23.897 "process": { 00:15:23.897 "type": "rebuild", 00:15:23.897 "target": "spare", 00:15:23.897 "progress": { 00:15:23.897 "blocks": 20480, 00:15:23.897 "percent": 32 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 "base_bdevs_list": [ 00:15:23.897 { 00:15:23.897 "name": "spare", 00:15:23.897 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:23.897 "is_configured": true, 00:15:23.897 "data_offset": 2048, 00:15:23.897 "data_size": 63488 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "name": "BaseBdev2", 00:15:23.897 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:23.897 "is_configured": true, 00:15:23.897 "data_offset": 2048, 00:15:23.897 "data_size": 63488 00:15:23.897 } 00:15:23.897 ] 00:15:23.897 }' 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.898 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.898 [2024-12-05 20:08:25.264783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.898 [2024-12-05 20:08:25.330106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.898 [2024-12-05 20:08:25.330229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.898 [2024-12-05 20:08:25.330270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.898 [2024-12-05 20:08:25.330294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.156 "name": "raid_bdev1", 00:15:24.156 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:24.156 "strip_size_kb": 0, 00:15:24.156 "state": "online", 00:15:24.156 "raid_level": "raid1", 00:15:24.156 "superblock": true, 00:15:24.156 "num_base_bdevs": 2, 00:15:24.156 "num_base_bdevs_discovered": 1, 00:15:24.156 "num_base_bdevs_operational": 1, 00:15:24.156 "base_bdevs_list": [ 00:15:24.156 { 00:15:24.156 "name": null, 00:15:24.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.156 "is_configured": false, 00:15:24.156 "data_offset": 0, 00:15:24.156 "data_size": 63488 00:15:24.156 }, 00:15:24.156 { 00:15:24.156 "name": "BaseBdev2", 00:15:24.156 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:24.156 "is_configured": true, 00:15:24.156 "data_offset": 2048, 00:15:24.156 "data_size": 63488 00:15:24.156 } 00:15:24.156 ] 00:15:24.156 }' 
00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.156 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.431 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.431 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.431 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.431 [2024-12-05 20:08:25.797550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.431 [2024-12-05 20:08:25.797624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.431 [2024-12-05 20:08:25.797649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:24.431 [2024-12-05 20:08:25.797663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.431 [2024-12-05 20:08:25.798171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.431 [2024-12-05 20:08:25.798194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.431 [2024-12-05 20:08:25.798294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:24.431 [2024-12-05 20:08:25.798309] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.431 [2024-12-05 20:08:25.798318] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:24.431 [2024-12-05 20:08:25.798340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.431 [2024-12-05 20:08:25.814512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:24.431 spare 00:15:24.431 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.431 20:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:24.431 [2024-12-05 20:08:25.816298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.809 "name": "raid_bdev1", 00:15:25.809 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:25.809 "strip_size_kb": 0, 00:15:25.809 
"state": "online", 00:15:25.809 "raid_level": "raid1", 00:15:25.809 "superblock": true, 00:15:25.809 "num_base_bdevs": 2, 00:15:25.809 "num_base_bdevs_discovered": 2, 00:15:25.809 "num_base_bdevs_operational": 2, 00:15:25.809 "process": { 00:15:25.809 "type": "rebuild", 00:15:25.809 "target": "spare", 00:15:25.809 "progress": { 00:15:25.809 "blocks": 20480, 00:15:25.809 "percent": 32 00:15:25.809 } 00:15:25.809 }, 00:15:25.809 "base_bdevs_list": [ 00:15:25.809 { 00:15:25.809 "name": "spare", 00:15:25.809 "uuid": "133ee721-4a24-59c7-9070-4e0a9eb089f9", 00:15:25.809 "is_configured": true, 00:15:25.809 "data_offset": 2048, 00:15:25.809 "data_size": 63488 00:15:25.809 }, 00:15:25.809 { 00:15:25.809 "name": "BaseBdev2", 00:15:25.809 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:25.809 "is_configured": true, 00:15:25.809 "data_offset": 2048, 00:15:25.809 "data_size": 63488 00:15:25.809 } 00:15:25.809 ] 00:15:25.809 }' 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.809 20:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 [2024-12-05 20:08:26.971882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.809 [2024-12-05 20:08:27.022229] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:25.809 [2024-12-05 20:08:27.022319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.809 [2024-12-05 20:08:27.022357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.809 [2024-12-05 20:08:27.022366] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.809 20:08:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.809 "name": "raid_bdev1", 00:15:25.809 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:25.809 "strip_size_kb": 0, 00:15:25.809 "state": "online", 00:15:25.809 "raid_level": "raid1", 00:15:25.809 "superblock": true, 00:15:25.809 "num_base_bdevs": 2, 00:15:25.809 "num_base_bdevs_discovered": 1, 00:15:25.809 "num_base_bdevs_operational": 1, 00:15:25.809 "base_bdevs_list": [ 00:15:25.809 { 00:15:25.809 "name": null, 00:15:25.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.809 "is_configured": false, 00:15:25.809 "data_offset": 0, 00:15:25.809 "data_size": 63488 00:15:25.809 }, 00:15:25.809 { 00:15:25.809 "name": "BaseBdev2", 00:15:25.809 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:25.809 "is_configured": true, 00:15:25.809 "data_offset": 2048, 00:15:25.809 "data_size": 63488 00:15:25.809 } 00:15:25.809 ] 00:15:25.809 }' 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.809 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.381 "name": "raid_bdev1", 00:15:26.381 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:26.381 "strip_size_kb": 0, 00:15:26.381 "state": "online", 00:15:26.381 "raid_level": "raid1", 00:15:26.381 "superblock": true, 00:15:26.381 "num_base_bdevs": 2, 00:15:26.381 "num_base_bdevs_discovered": 1, 00:15:26.381 "num_base_bdevs_operational": 1, 00:15:26.381 "base_bdevs_list": [ 00:15:26.381 { 00:15:26.381 "name": null, 00:15:26.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.381 "is_configured": false, 00:15:26.381 "data_offset": 0, 00:15:26.381 "data_size": 63488 00:15:26.381 }, 00:15:26.381 { 00:15:26.381 "name": "BaseBdev2", 00:15:26.381 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:26.381 "is_configured": true, 00:15:26.381 "data_offset": 2048, 00:15:26.381 "data_size": 63488 00:15:26.381 } 00:15:26.381 ] 00:15:26.381 }' 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 [2024-12-05 20:08:27.678637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.381 [2024-12-05 20:08:27.678697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.381 [2024-12-05 20:08:27.678729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:26.381 [2024-12-05 20:08:27.678741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.381 [2024-12-05 20:08:27.679223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.381 [2024-12-05 20:08:27.679247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.381 [2024-12-05 20:08:27.679335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:26.381 [2024-12-05 20:08:27.679349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.381 [2024-12-05 20:08:27.679361] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:26.381 [2024-12-05 20:08:27.679372] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:26.381 BaseBdev1 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.381 20:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:27.319 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.320 "name": "raid_bdev1", 00:15:27.320 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:27.320 "strip_size_kb": 0, 00:15:27.320 "state": "online", 00:15:27.320 "raid_level": "raid1", 00:15:27.320 "superblock": true, 00:15:27.320 "num_base_bdevs": 2, 00:15:27.320 "num_base_bdevs_discovered": 1, 00:15:27.320 "num_base_bdevs_operational": 1, 00:15:27.320 "base_bdevs_list": [ 00:15:27.320 { 00:15:27.320 "name": null, 00:15:27.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.320 "is_configured": false, 00:15:27.320 "data_offset": 0, 00:15:27.320 "data_size": 63488 00:15:27.320 }, 00:15:27.320 { 00:15:27.320 "name": "BaseBdev2", 00:15:27.320 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:27.320 "is_configured": true, 00:15:27.320 "data_offset": 2048, 00:15:27.320 "data_size": 63488 00:15:27.320 } 00:15:27.320 ] 00:15:27.320 }' 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.320 20:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.889 "name": "raid_bdev1", 00:15:27.889 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:27.889 "strip_size_kb": 0, 00:15:27.889 "state": "online", 00:15:27.889 "raid_level": "raid1", 00:15:27.889 "superblock": true, 00:15:27.889 "num_base_bdevs": 2, 00:15:27.889 "num_base_bdevs_discovered": 1, 00:15:27.889 "num_base_bdevs_operational": 1, 00:15:27.889 "base_bdevs_list": [ 00:15:27.889 { 00:15:27.889 "name": null, 00:15:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.889 "is_configured": false, 00:15:27.889 "data_offset": 0, 00:15:27.889 "data_size": 63488 00:15:27.889 }, 00:15:27.889 { 00:15:27.889 "name": "BaseBdev2", 00:15:27.889 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:27.889 "is_configured": true, 00:15:27.889 "data_offset": 2048, 00:15:27.889 "data_size": 63488 00:15:27.889 } 00:15:27.889 ] 00:15:27.889 }' 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.889 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.890 [2024-12-05 20:08:29.276223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.890 [2024-12-05 20:08:29.276438] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:27.890 [2024-12-05 20:08:29.276508] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.890 request: 00:15:27.890 { 00:15:27.890 "base_bdev": "BaseBdev1", 00:15:27.890 "raid_bdev": "raid_bdev1", 00:15:27.890 "method": "bdev_raid_add_base_bdev", 00:15:27.890 "req_id": 1 00:15:27.890 } 00:15:27.890 Got JSON-RPC error response 00:15:27.890 response: 00:15:27.890 { 00:15:27.890 "code": -22, 00:15:27.890 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:27.890 } 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.890 20:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.268 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.268 "name": "raid_bdev1", 00:15:29.268 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:29.268 "strip_size_kb": 0, 00:15:29.268 "state": "online", 00:15:29.268 "raid_level": "raid1", 00:15:29.268 "superblock": true, 00:15:29.268 "num_base_bdevs": 2, 00:15:29.268 "num_base_bdevs_discovered": 1, 00:15:29.268 "num_base_bdevs_operational": 1, 00:15:29.268 "base_bdevs_list": [ 00:15:29.268 { 00:15:29.268 "name": null, 00:15:29.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.268 "is_configured": false, 00:15:29.268 "data_offset": 0, 00:15:29.268 "data_size": 63488 00:15:29.268 }, 00:15:29.268 { 00:15:29.269 "name": "BaseBdev2", 00:15:29.269 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:29.269 "is_configured": true, 00:15:29.269 "data_offset": 2048, 00:15:29.269 "data_size": 63488 00:15:29.269 } 00:15:29.269 ] 00:15:29.269 }' 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.269 20:08:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.269 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.529 "name": "raid_bdev1", 00:15:29.529 "uuid": "175248d1-622c-4434-8d98-d033e5fa116b", 00:15:29.529 "strip_size_kb": 0, 00:15:29.529 "state": "online", 00:15:29.529 "raid_level": "raid1", 00:15:29.529 "superblock": true, 00:15:29.529 "num_base_bdevs": 2, 00:15:29.529 "num_base_bdevs_discovered": 1, 00:15:29.529 "num_base_bdevs_operational": 1, 00:15:29.529 "base_bdevs_list": [ 00:15:29.529 { 00:15:29.529 "name": null, 00:15:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.529 "is_configured": false, 00:15:29.529 "data_offset": 0, 00:15:29.529 "data_size": 63488 00:15:29.529 }, 00:15:29.529 { 00:15:29.529 "name": "BaseBdev2", 00:15:29.529 "uuid": "16ad79f8-5eb9-5983-819b-80f42fcb5fe5", 00:15:29.529 "is_configured": true, 00:15:29.529 "data_offset": 2048, 00:15:29.529 "data_size": 63488 00:15:29.529 } 00:15:29.529 ] 00:15:29.529 }' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.529 20:08:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76963 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76963 ']' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76963 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76963 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.529 killing process with pid 76963 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76963' 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76963 00:15:29.529 Received shutdown signal, test time was about 16.987874 seconds 00:15:29.529 00:15:29.529 Latency(us) 00:15:29.529 [2024-12-05T20:08:30.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.529 [2024-12-05T20:08:30.966Z] =================================================================================================================== 00:15:29.529 [2024-12-05T20:08:30.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.529 [2024-12-05 20:08:30.823314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.529 20:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76963 00:15:29.529 [2024-12-05 20:08:30.823464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.529 [2024-12-05 20:08:30.823529] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.529 [2024-12-05 20:08:30.823542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:29.789 [2024-12-05 20:08:31.048802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.178 00:15:31.178 real 0m20.111s 00:15:31.178 user 0m26.302s 00:15:31.178 sys 0m2.143s 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.178 ************************************ 00:15:31.178 END TEST raid_rebuild_test_sb_io 00:15:31.178 ************************************ 00:15:31.178 20:08:32 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:31.178 20:08:32 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:31.178 20:08:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:31.178 20:08:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.178 20:08:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.178 ************************************ 00:15:31.178 START TEST raid_rebuild_test 00:15:31.178 ************************************ 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:31.178 20:08:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77646 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.178 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77646 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77646 ']' 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.179 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.179 [2024-12-05 20:08:32.396183] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:15:31.179 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:31.179 Zero copy mechanism will not be used. 00:15:31.179 [2024-12-05 20:08:32.396397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77646 ] 00:15:31.179 [2024-12-05 20:08:32.569286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.440 [2024-12-05 20:08:32.686178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.699 [2024-12-05 20:08:32.883252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.699 [2024-12-05 20:08:32.883384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.959 BaseBdev1_malloc 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:31.959 [2024-12-05 20:08:33.289476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.959 [2024-12-05 20:08:33.289541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.959 [2024-12-05 20:08:33.289564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.959 [2024-12-05 20:08:33.289575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.959 [2024-12-05 20:08:33.291690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.959 [2024-12-05 20:08:33.291734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.959 BaseBdev1 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.959 BaseBdev2_malloc 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.959 [2024-12-05 20:08:33.343749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.959 [2024-12-05 20:08:33.343810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:31.959 [2024-12-05 20:08:33.343835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.959 [2024-12-05 20:08:33.343846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.959 [2024-12-05 20:08:33.345917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.959 [2024-12-05 20:08:33.345956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.959 BaseBdev2 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.959 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 BaseBdev3_malloc 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 [2024-12-05 20:08:33.406959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:32.219 [2024-12-05 20:08:33.407065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.219 [2024-12-05 20:08:33.407113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.219 [2024-12-05 20:08:33.407126] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.219 [2024-12-05 20:08:33.409380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.219 [2024-12-05 20:08:33.409422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:32.219 BaseBdev3 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 BaseBdev4_malloc 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 [2024-12-05 20:08:33.461195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:32.219 [2024-12-05 20:08:33.461301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.219 [2024-12-05 20:08:33.461328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.219 [2024-12-05 20:08:33.461339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.219 [2024-12-05 20:08:33.463390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.219 [2024-12-05 20:08:33.463432] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:32.219 BaseBdev4 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 spare_malloc 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 spare_delay 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 [2024-12-05 20:08:33.525284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.219 [2024-12-05 20:08:33.525336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.219 [2024-12-05 20:08:33.525371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:32.219 [2024-12-05 20:08:33.525381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.219 [2024-12-05 
20:08:33.527333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.219 [2024-12-05 20:08:33.527426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.219 spare 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 [2024-12-05 20:08:33.537322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.219 [2024-12-05 20:08:33.539087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.219 [2024-12-05 20:08:33.539150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.219 [2024-12-05 20:08:33.539202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.219 [2024-12-05 20:08:33.539277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:32.219 [2024-12-05 20:08:33.539290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:32.219 [2024-12-05 20:08:33.539532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:32.219 [2024-12-05 20:08:33.539691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:32.219 [2024-12-05 20:08:33.539702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:32.219 [2024-12-05 20:08:33.539841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.219 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.219 "name": "raid_bdev1", 00:15:32.219 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:32.219 "strip_size_kb": 0, 00:15:32.219 "state": "online", 00:15:32.219 "raid_level": 
"raid1", 00:15:32.219 "superblock": false, 00:15:32.219 "num_base_bdevs": 4, 00:15:32.219 "num_base_bdevs_discovered": 4, 00:15:32.219 "num_base_bdevs_operational": 4, 00:15:32.219 "base_bdevs_list": [ 00:15:32.219 { 00:15:32.220 "name": "BaseBdev1", 00:15:32.220 "uuid": "52cc1629-0d61-55c7-ad2a-d21793dc8396", 00:15:32.220 "is_configured": true, 00:15:32.220 "data_offset": 0, 00:15:32.220 "data_size": 65536 00:15:32.220 }, 00:15:32.220 { 00:15:32.220 "name": "BaseBdev2", 00:15:32.220 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:32.220 "is_configured": true, 00:15:32.220 "data_offset": 0, 00:15:32.220 "data_size": 65536 00:15:32.220 }, 00:15:32.220 { 00:15:32.220 "name": "BaseBdev3", 00:15:32.220 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:32.220 "is_configured": true, 00:15:32.220 "data_offset": 0, 00:15:32.220 "data_size": 65536 00:15:32.220 }, 00:15:32.220 { 00:15:32.220 "name": "BaseBdev4", 00:15:32.220 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:32.220 "is_configured": true, 00:15:32.220 "data_offset": 0, 00:15:32.220 "data_size": 65536 00:15:32.220 } 00:15:32.220 ] 00:15:32.220 }' 00:15:32.220 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.220 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.790 [2024-12-05 20:08:34.021179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.790 20:08:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.790 20:08:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:33.050 [2024-12-05 20:08:34.296371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:33.050 /dev/nbd0 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.050 1+0 records in 00:15:33.050 1+0 records out 00:15:33.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466095 s, 8.8 MB/s 00:15:33.050 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:33.051 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:38.320 65536+0 records in 00:15:38.320 65536+0 records out 00:15:38.320 33554432 bytes (34 MB, 32 MiB) copied, 5.28856 s, 6.3 MB/s 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.320 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.579 [2024-12-05 20:08:39.853368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.579 
20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.579 [2024-12-05 20:08:39.897374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.579 20:08:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.579 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.579 "name": "raid_bdev1", 00:15:38.579 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:38.579 "strip_size_kb": 0, 00:15:38.579 "state": "online", 00:15:38.579 "raid_level": "raid1", 00:15:38.579 "superblock": false, 00:15:38.579 "num_base_bdevs": 4, 00:15:38.579 "num_base_bdevs_discovered": 3, 00:15:38.579 "num_base_bdevs_operational": 3, 00:15:38.579 "base_bdevs_list": [ 00:15:38.579 { 00:15:38.579 "name": null, 00:15:38.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.579 "is_configured": false, 00:15:38.579 "data_offset": 0, 00:15:38.579 "data_size": 65536 00:15:38.579 }, 00:15:38.579 { 00:15:38.579 "name": "BaseBdev2", 00:15:38.579 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:38.579 "is_configured": true, 00:15:38.579 "data_offset": 0, 00:15:38.579 "data_size": 65536 00:15:38.579 }, 00:15:38.579 { 00:15:38.579 "name": "BaseBdev3", 00:15:38.580 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:38.580 "is_configured": true, 00:15:38.580 "data_offset": 0, 00:15:38.580 "data_size": 65536 00:15:38.580 }, 00:15:38.580 { 00:15:38.580 "name": "BaseBdev4", 00:15:38.580 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:38.580 
"is_configured": true, 00:15:38.580 "data_offset": 0, 00:15:38.580 "data_size": 65536 00:15:38.580 } 00:15:38.580 ] 00:15:38.580 }' 00:15:38.580 20:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.580 20:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.148 20:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.149 20:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.149 20:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.149 [2024-12-05 20:08:40.380613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.149 [2024-12-05 20:08:40.396102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:39.149 20:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.149 20:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:39.149 [2024-12-05 20:08:40.398118] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.089 
20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.089 "name": "raid_bdev1", 00:15:40.089 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:40.089 "strip_size_kb": 0, 00:15:40.089 "state": "online", 00:15:40.089 "raid_level": "raid1", 00:15:40.089 "superblock": false, 00:15:40.089 "num_base_bdevs": 4, 00:15:40.089 "num_base_bdevs_discovered": 4, 00:15:40.089 "num_base_bdevs_operational": 4, 00:15:40.089 "process": { 00:15:40.089 "type": "rebuild", 00:15:40.089 "target": "spare", 00:15:40.089 "progress": { 00:15:40.089 "blocks": 20480, 00:15:40.089 "percent": 31 00:15:40.089 } 00:15:40.089 }, 00:15:40.089 "base_bdevs_list": [ 00:15:40.089 { 00:15:40.089 "name": "spare", 00:15:40.089 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:40.089 "is_configured": true, 00:15:40.089 "data_offset": 0, 00:15:40.089 "data_size": 65536 00:15:40.089 }, 00:15:40.089 { 00:15:40.089 "name": "BaseBdev2", 00:15:40.089 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:40.089 "is_configured": true, 00:15:40.089 "data_offset": 0, 00:15:40.089 "data_size": 65536 00:15:40.089 }, 00:15:40.089 { 00:15:40.089 "name": "BaseBdev3", 00:15:40.089 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:40.089 "is_configured": true, 00:15:40.089 "data_offset": 0, 00:15:40.089 "data_size": 65536 00:15:40.089 }, 00:15:40.089 { 00:15:40.089 "name": "BaseBdev4", 00:15:40.089 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:40.089 "is_configured": true, 00:15:40.089 "data_offset": 0, 00:15:40.089 "data_size": 65536 00:15:40.089 } 00:15:40.089 ] 00:15:40.089 }' 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.089 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.349 [2024-12-05 20:08:41.561281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.349 [2024-12-05 20:08:41.603324] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.349 [2024-12-05 20:08:41.603445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.349 [2024-12-05 20:08:41.603483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.349 [2024-12-05 20:08:41.603506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.349 20:08:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.349 "name": "raid_bdev1", 00:15:40.349 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:40.349 "strip_size_kb": 0, 00:15:40.349 "state": "online", 00:15:40.349 "raid_level": "raid1", 00:15:40.349 "superblock": false, 00:15:40.349 "num_base_bdevs": 4, 00:15:40.349 "num_base_bdevs_discovered": 3, 00:15:40.349 "num_base_bdevs_operational": 3, 00:15:40.349 "base_bdevs_list": [ 00:15:40.349 { 00:15:40.349 "name": null, 00:15:40.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.349 "is_configured": false, 00:15:40.349 "data_offset": 0, 00:15:40.349 "data_size": 65536 00:15:40.349 }, 00:15:40.349 { 00:15:40.349 "name": "BaseBdev2", 00:15:40.349 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:40.349 "is_configured": true, 00:15:40.349 "data_offset": 0, 00:15:40.349 "data_size": 65536 00:15:40.349 }, 00:15:40.349 { 00:15:40.349 "name": 
"BaseBdev3", 00:15:40.349 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:40.349 "is_configured": true, 00:15:40.349 "data_offset": 0, 00:15:40.349 "data_size": 65536 00:15:40.349 }, 00:15:40.349 { 00:15:40.349 "name": "BaseBdev4", 00:15:40.349 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:40.349 "is_configured": true, 00:15:40.349 "data_offset": 0, 00:15:40.349 "data_size": 65536 00:15:40.349 } 00:15:40.349 ] 00:15:40.349 }' 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.349 20:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.609 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.868 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.868 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.868 "name": "raid_bdev1", 00:15:40.868 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:40.868 "strip_size_kb": 0, 00:15:40.868 "state": "online", 00:15:40.868 "raid_level": 
"raid1", 00:15:40.868 "superblock": false, 00:15:40.868 "num_base_bdevs": 4, 00:15:40.868 "num_base_bdevs_discovered": 3, 00:15:40.868 "num_base_bdevs_operational": 3, 00:15:40.868 "base_bdevs_list": [ 00:15:40.868 { 00:15:40.868 "name": null, 00:15:40.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.868 "is_configured": false, 00:15:40.868 "data_offset": 0, 00:15:40.868 "data_size": 65536 00:15:40.868 }, 00:15:40.868 { 00:15:40.868 "name": "BaseBdev2", 00:15:40.869 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:40.869 "is_configured": true, 00:15:40.869 "data_offset": 0, 00:15:40.869 "data_size": 65536 00:15:40.869 }, 00:15:40.869 { 00:15:40.869 "name": "BaseBdev3", 00:15:40.869 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:40.869 "is_configured": true, 00:15:40.869 "data_offset": 0, 00:15:40.869 "data_size": 65536 00:15:40.869 }, 00:15:40.869 { 00:15:40.869 "name": "BaseBdev4", 00:15:40.869 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:40.869 "is_configured": true, 00:15:40.869 "data_offset": 0, 00:15:40.869 "data_size": 65536 00:15:40.869 } 00:15:40.869 ] 00:15:40.869 }' 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.869 [2024-12-05 20:08:42.172422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:15:40.869 [2024-12-05 20:08:42.186609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.869 20:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:40.869 [2024-12-05 20:08:42.188448] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.805 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.065 "name": "raid_bdev1", 00:15:42.065 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:42.065 "strip_size_kb": 0, 00:15:42.065 "state": "online", 00:15:42.065 "raid_level": "raid1", 00:15:42.065 "superblock": false, 00:15:42.065 "num_base_bdevs": 4, 00:15:42.065 "num_base_bdevs_discovered": 4, 00:15:42.065 "num_base_bdevs_operational": 4, 
00:15:42.065 "process": { 00:15:42.065 "type": "rebuild", 00:15:42.065 "target": "spare", 00:15:42.065 "progress": { 00:15:42.065 "blocks": 20480, 00:15:42.065 "percent": 31 00:15:42.065 } 00:15:42.065 }, 00:15:42.065 "base_bdevs_list": [ 00:15:42.065 { 00:15:42.065 "name": "spare", 00:15:42.065 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:42.065 "is_configured": true, 00:15:42.065 "data_offset": 0, 00:15:42.065 "data_size": 65536 00:15:42.065 }, 00:15:42.065 { 00:15:42.065 "name": "BaseBdev2", 00:15:42.065 "uuid": "d1f093cd-3c21-5ae0-a38f-ac1186f11da2", 00:15:42.065 "is_configured": true, 00:15:42.065 "data_offset": 0, 00:15:42.065 "data_size": 65536 00:15:42.065 }, 00:15:42.065 { 00:15:42.065 "name": "BaseBdev3", 00:15:42.065 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:42.065 "is_configured": true, 00:15:42.065 "data_offset": 0, 00:15:42.065 "data_size": 65536 00:15:42.065 }, 00:15:42.065 { 00:15:42.065 "name": "BaseBdev4", 00:15:42.065 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:42.065 "is_configured": true, 00:15:42.065 "data_offset": 0, 00:15:42.065 "data_size": 65536 00:15:42.065 } 00:15:42.065 ] 00:15:42.065 }' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.065 [2024-12-05 20:08:43.327917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.065 [2024-12-05 20:08:43.393424] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.065 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.066 "name": "raid_bdev1", 00:15:42.066 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:42.066 "strip_size_kb": 0, 00:15:42.066 "state": "online", 00:15:42.066 "raid_level": "raid1", 00:15:42.066 "superblock": false, 00:15:42.066 "num_base_bdevs": 4, 00:15:42.066 "num_base_bdevs_discovered": 3, 00:15:42.066 "num_base_bdevs_operational": 3, 00:15:42.066 "process": { 00:15:42.066 "type": "rebuild", 00:15:42.066 "target": "spare", 00:15:42.066 "progress": { 00:15:42.066 "blocks": 24576, 00:15:42.066 "percent": 37 00:15:42.066 } 00:15:42.066 }, 00:15:42.066 "base_bdevs_list": [ 00:15:42.066 { 00:15:42.066 "name": "spare", 00:15:42.066 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:42.066 "is_configured": true, 00:15:42.066 "data_offset": 0, 00:15:42.066 "data_size": 65536 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": null, 00:15:42.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.066 "is_configured": false, 00:15:42.066 "data_offset": 0, 00:15:42.066 "data_size": 65536 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": "BaseBdev3", 00:15:42.066 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:42.066 "is_configured": true, 00:15:42.066 "data_offset": 0, 00:15:42.066 "data_size": 65536 00:15:42.066 }, 00:15:42.066 { 00:15:42.066 "name": "BaseBdev4", 00:15:42.066 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:42.066 "is_configured": true, 00:15:42.066 "data_offset": 0, 00:15:42.066 "data_size": 65536 00:15:42.066 } 00:15:42.066 ] 00:15:42.066 }' 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.066 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.325 20:08:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.325 "name": "raid_bdev1", 00:15:42.325 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:42.325 "strip_size_kb": 0, 00:15:42.325 "state": "online", 00:15:42.325 "raid_level": "raid1", 00:15:42.325 "superblock": false, 00:15:42.325 "num_base_bdevs": 4, 00:15:42.325 "num_base_bdevs_discovered": 3, 00:15:42.325 "num_base_bdevs_operational": 3, 00:15:42.325 "process": { 00:15:42.325 "type": "rebuild", 00:15:42.325 "target": "spare", 00:15:42.325 "progress": { 00:15:42.325 "blocks": 26624, 00:15:42.325 "percent": 40 
00:15:42.325 } 00:15:42.325 }, 00:15:42.325 "base_bdevs_list": [ 00:15:42.325 { 00:15:42.325 "name": "spare", 00:15:42.325 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:42.325 "is_configured": true, 00:15:42.325 "data_offset": 0, 00:15:42.325 "data_size": 65536 00:15:42.325 }, 00:15:42.325 { 00:15:42.325 "name": null, 00:15:42.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.325 "is_configured": false, 00:15:42.325 "data_offset": 0, 00:15:42.325 "data_size": 65536 00:15:42.325 }, 00:15:42.325 { 00:15:42.325 "name": "BaseBdev3", 00:15:42.325 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:42.325 "is_configured": true, 00:15:42.325 "data_offset": 0, 00:15:42.325 "data_size": 65536 00:15:42.325 }, 00:15:42.325 { 00:15:42.325 "name": "BaseBdev4", 00:15:42.325 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:42.325 "is_configured": true, 00:15:42.325 "data_offset": 0, 00:15:42.325 "data_size": 65536 00:15:42.325 } 00:15:42.325 ] 00:15:42.325 }' 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.325 20:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.262 20:08:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.262 20:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.522 "name": "raid_bdev1", 00:15:43.522 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:43.522 "strip_size_kb": 0, 00:15:43.522 "state": "online", 00:15:43.522 "raid_level": "raid1", 00:15:43.522 "superblock": false, 00:15:43.522 "num_base_bdevs": 4, 00:15:43.522 "num_base_bdevs_discovered": 3, 00:15:43.522 "num_base_bdevs_operational": 3, 00:15:43.522 "process": { 00:15:43.522 "type": "rebuild", 00:15:43.522 "target": "spare", 00:15:43.522 "progress": { 00:15:43.522 "blocks": 49152, 00:15:43.522 "percent": 75 00:15:43.522 } 00:15:43.522 }, 00:15:43.522 "base_bdevs_list": [ 00:15:43.522 { 00:15:43.522 "name": "spare", 00:15:43.522 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:43.522 "is_configured": true, 00:15:43.522 "data_offset": 0, 00:15:43.522 "data_size": 65536 00:15:43.522 }, 00:15:43.522 { 00:15:43.522 "name": null, 00:15:43.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.522 "is_configured": false, 00:15:43.522 "data_offset": 0, 00:15:43.522 "data_size": 65536 00:15:43.522 }, 00:15:43.522 { 00:15:43.522 "name": "BaseBdev3", 00:15:43.522 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:43.522 "is_configured": true, 
00:15:43.522 "data_offset": 0, 00:15:43.522 "data_size": 65536 00:15:43.522 }, 00:15:43.522 { 00:15:43.522 "name": "BaseBdev4", 00:15:43.522 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:43.522 "is_configured": true, 00:15:43.522 "data_offset": 0, 00:15:43.522 "data_size": 65536 00:15:43.522 } 00:15:43.522 ] 00:15:43.522 }' 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.522 20:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.090 [2024-12-05 20:08:45.401545] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:44.090 [2024-12-05 20:08:45.401626] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:44.090 [2024-12-05 20:08:45.401669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.656 "name": "raid_bdev1", 00:15:44.656 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:44.656 "strip_size_kb": 0, 00:15:44.656 "state": "online", 00:15:44.656 "raid_level": "raid1", 00:15:44.656 "superblock": false, 00:15:44.656 "num_base_bdevs": 4, 00:15:44.656 "num_base_bdevs_discovered": 3, 00:15:44.656 "num_base_bdevs_operational": 3, 00:15:44.656 "base_bdevs_list": [ 00:15:44.656 { 00:15:44.656 "name": "spare", 00:15:44.656 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 0, 00:15:44.656 "data_size": 65536 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": null, 00:15:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.656 "is_configured": false, 00:15:44.656 "data_offset": 0, 00:15:44.656 "data_size": 65536 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": "BaseBdev3", 00:15:44.656 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 0, 00:15:44.656 "data_size": 65536 00:15:44.656 }, 00:15:44.656 { 00:15:44.656 "name": "BaseBdev4", 00:15:44.656 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:44.656 "is_configured": true, 00:15:44.656 "data_offset": 0, 00:15:44.656 "data_size": 65536 00:15:44.656 } 00:15:44.656 ] 00:15:44.656 }' 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.656 20:08:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:44.656 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.657 20:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.657 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.657 "name": "raid_bdev1", 00:15:44.657 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:44.657 "strip_size_kb": 0, 00:15:44.657 "state": "online", 00:15:44.657 "raid_level": "raid1", 00:15:44.657 "superblock": false, 00:15:44.657 "num_base_bdevs": 4, 00:15:44.657 "num_base_bdevs_discovered": 3, 00:15:44.657 "num_base_bdevs_operational": 3, 00:15:44.657 "base_bdevs_list": [ 00:15:44.657 { 00:15:44.657 "name": "spare", 
00:15:44.657 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:44.657 "is_configured": true, 00:15:44.657 "data_offset": 0, 00:15:44.657 "data_size": 65536 00:15:44.657 }, 00:15:44.657 { 00:15:44.657 "name": null, 00:15:44.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.657 "is_configured": false, 00:15:44.657 "data_offset": 0, 00:15:44.657 "data_size": 65536 00:15:44.657 }, 00:15:44.657 { 00:15:44.657 "name": "BaseBdev3", 00:15:44.657 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:44.657 "is_configured": true, 00:15:44.657 "data_offset": 0, 00:15:44.657 "data_size": 65536 00:15:44.657 }, 00:15:44.657 { 00:15:44.657 "name": "BaseBdev4", 00:15:44.657 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:44.657 "is_configured": true, 00:15:44.657 "data_offset": 0, 00:15:44.657 "data_size": 65536 00:15:44.657 } 00:15:44.657 ] 00:15:44.657 }' 00:15:44.657 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.657 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.657 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.914 20:08:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.914 "name": "raid_bdev1", 00:15:44.914 "uuid": "05644081-3d0a-471a-ac46-51922bca9d67", 00:15:44.914 "strip_size_kb": 0, 00:15:44.914 "state": "online", 00:15:44.914 "raid_level": "raid1", 00:15:44.914 "superblock": false, 00:15:44.914 "num_base_bdevs": 4, 00:15:44.914 "num_base_bdevs_discovered": 3, 00:15:44.914 "num_base_bdevs_operational": 3, 00:15:44.914 "base_bdevs_list": [ 00:15:44.914 { 00:15:44.914 "name": "spare", 00:15:44.914 "uuid": "89ee322a-deaa-52fd-b87e-fe1c26c16b9c", 00:15:44.914 "is_configured": true, 00:15:44.914 "data_offset": 0, 00:15:44.914 "data_size": 65536 00:15:44.914 }, 00:15:44.914 { 00:15:44.914 "name": null, 00:15:44.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.914 "is_configured": false, 00:15:44.914 "data_offset": 0, 00:15:44.914 "data_size": 65536 00:15:44.914 }, 00:15:44.914 { 00:15:44.914 "name": "BaseBdev3", 00:15:44.914 "uuid": "06e9d0fd-d61f-5c20-84da-03196663b03e", 00:15:44.914 "is_configured": true, 
00:15:44.914 "data_offset": 0, 00:15:44.914 "data_size": 65536 00:15:44.914 }, 00:15:44.914 { 00:15:44.914 "name": "BaseBdev4", 00:15:44.914 "uuid": "fbcbf4ba-1807-5c05-af6e-50003cd9662b", 00:15:44.914 "is_configured": true, 00:15:44.914 "data_offset": 0, 00:15:44.914 "data_size": 65536 00:15:44.914 } 00:15:44.914 ] 00:15:44.914 }' 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.914 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.172 [2024-12-05 20:08:46.552634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.172 [2024-12-05 20:08:46.552665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.172 [2024-12-05 20:08:46.552749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.172 [2024-12-05 20:08:46.552841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.172 [2024-12-05 20:08:46.552851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.172 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:45.432 /dev/nbd0 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:45.432 20:08:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.432 1+0 records in 00:15:45.432 1+0 records out 00:15:45.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378823 s, 10.8 MB/s 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.432 20:08:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.691 /dev/nbd1 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.691 
20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.691 1+0 records in 00:15:45.691 1+0 records out 00:15:45.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360022 s, 11.4 MB/s 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:15:45.691 20:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.950 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.210 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:46.470 
20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77646 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77646 ']' 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77646 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77646 00:15:46.470 killing process with pid 77646 00:15:46.470 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.470 00:15:46.470 Latency(us) 00:15:46.470 [2024-12-05T20:08:47.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.470 [2024-12-05T20:08:47.907Z] =================================================================================================================== 00:15:46.470 [2024-12-05T20:08:47.907Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77646' 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77646 00:15:46.470 [2024-12-05 20:08:47.747873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.470 20:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77646 00:15:47.039 [2024-12-05 20:08:48.216897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.977 00:15:47.977 real 0m16.994s 00:15:47.977 user 0m19.423s 00:15:47.977 sys 0m2.869s 00:15:47.977 ************************************ 00:15:47.977 END TEST raid_rebuild_test 00:15:47.977 ************************************ 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.977 20:08:49 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:47.977 20:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.977 20:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.977 20:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.977 ************************************ 00:15:47.977 START TEST raid_rebuild_test_sb 00:15:47.977 ************************************ 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:47.977 20:08:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78093 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78093 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78093 ']' 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:47.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.977 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.236 [2024-12-05 20:08:49.447253] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:15:48.236 [2024-12-05 20:08:49.447458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78093 ] 00:15:48.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:48.236 Zero copy mechanism will not be used. 00:15:48.236 [2024-12-05 20:08:49.619000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.495 [2024-12-05 20:08:49.723210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.495 [2024-12-05 20:08:49.912140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.495 [2024-12-05 20:08:49.912260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 
BaseBdev1_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 [2024-12-05 20:08:50.321750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:49.063 [2024-12-05 20:08:50.321820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.063 [2024-12-05 20:08:50.321857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.063 [2024-12-05 20:08:50.321868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.063 [2024-12-05 20:08:50.323846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.063 [2024-12-05 20:08:50.323893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.063 BaseBdev1 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 BaseBdev2_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 [2024-12-05 20:08:50.376770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:49.063 [2024-12-05 20:08:50.376833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.063 [2024-12-05 20:08:50.376856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:49.063 [2024-12-05 20:08:50.376867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.063 [2024-12-05 20:08:50.378948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.063 [2024-12-05 20:08:50.378985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:49.063 BaseBdev2 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 BaseBdev3_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 [2024-12-05 20:08:50.447732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:49.063 [2024-12-05 20:08:50.447848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.063 [2024-12-05 20:08:50.447899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.063 [2024-12-05 20:08:50.447937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.063 [2024-12-05 20:08:50.450194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.063 [2024-12-05 20:08:50.450284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:49.063 BaseBdev3 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.063 BaseBdev4_malloc 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.063 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 [2024-12-05 20:08:50.502321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:15:49.323 [2024-12-05 20:08:50.502382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.323 [2024-12-05 20:08:50.502401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:49.323 [2024-12-05 20:08:50.502412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.323 [2024-12-05 20:08:50.504351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.323 [2024-12-05 20:08:50.504436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:49.323 BaseBdev4 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 spare_malloc 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 spare_delay 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 20:08:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 [2024-12-05 20:08:50.566470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.323 [2024-12-05 20:08:50.566520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.323 [2024-12-05 20:08:50.566535] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:49.323 [2024-12-05 20:08:50.566545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.323 [2024-12-05 20:08:50.568533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.323 [2024-12-05 20:08:50.568572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.323 spare 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 [2024-12-05 20:08:50.578499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.323 [2024-12-05 20:08:50.580225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.323 [2024-12-05 20:08:50.580287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.323 [2024-12-05 20:08:50.580336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.323 [2024-12-05 20:08:50.580516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:49.323 [2024-12-05 20:08:50.580531] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.323 [2024-12-05 20:08:50.580809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:49.323 [2024-12-05 20:08:50.581010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:49.323 [2024-12-05 20:08:50.581022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:49.323 [2024-12-05 20:08:50.581175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.323 "name": "raid_bdev1", 00:15:49.323 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:49.323 "strip_size_kb": 0, 00:15:49.323 "state": "online", 00:15:49.323 "raid_level": "raid1", 00:15:49.323 "superblock": true, 00:15:49.323 "num_base_bdevs": 4, 00:15:49.323 "num_base_bdevs_discovered": 4, 00:15:49.323 "num_base_bdevs_operational": 4, 00:15:49.323 "base_bdevs_list": [ 00:15:49.323 { 00:15:49.323 "name": "BaseBdev1", 00:15:49.323 "uuid": "759b04ad-6f9c-5c34-893c-23f7d83d0c79", 00:15:49.323 "is_configured": true, 00:15:49.323 "data_offset": 2048, 00:15:49.323 "data_size": 63488 00:15:49.323 }, 00:15:49.323 { 00:15:49.323 "name": "BaseBdev2", 00:15:49.323 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:49.323 "is_configured": true, 00:15:49.323 "data_offset": 2048, 00:15:49.323 "data_size": 63488 00:15:49.323 }, 00:15:49.323 { 00:15:49.323 "name": "BaseBdev3", 00:15:49.323 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:49.323 "is_configured": true, 00:15:49.324 "data_offset": 2048, 00:15:49.324 "data_size": 63488 00:15:49.324 }, 00:15:49.324 { 00:15:49.324 "name": "BaseBdev4", 00:15:49.324 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:49.324 "is_configured": true, 00:15:49.324 "data_offset": 2048, 00:15:49.324 "data_size": 63488 00:15:49.324 } 00:15:49.324 ] 00:15:49.324 }' 00:15:49.324 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.324 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.893 [2024-12-05 20:08:51.074056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.893 
20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:49.893 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.894 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:50.153 [2024-12-05 20:08:51.333261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:50.153 /dev/nbd0 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.153 20:08:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.153 1+0 records in 00:15:50.153 1+0 records out 00:15:50.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005447 s, 7.5 MB/s 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:50.153 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:55.425 63488+0 records in 00:15:55.425 63488+0 records out 00:15:55.426 32505856 bytes (33 MB, 31 MiB) copied, 5.15469 s, 6.3 MB/s 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.426 [2024-12-05 20:08:56.763003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.426 [2024-12-05 20:08:56.779071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.426 20:08:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.426 "name": "raid_bdev1", 00:15:55.426 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:55.426 "strip_size_kb": 0, 00:15:55.426 "state": "online", 00:15:55.426 "raid_level": "raid1", 00:15:55.426 "superblock": true, 00:15:55.426 "num_base_bdevs": 4, 
00:15:55.426 "num_base_bdevs_discovered": 3, 00:15:55.426 "num_base_bdevs_operational": 3, 00:15:55.426 "base_bdevs_list": [ 00:15:55.426 { 00:15:55.426 "name": null, 00:15:55.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.426 "is_configured": false, 00:15:55.426 "data_offset": 0, 00:15:55.426 "data_size": 63488 00:15:55.426 }, 00:15:55.426 { 00:15:55.426 "name": "BaseBdev2", 00:15:55.426 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:55.426 "is_configured": true, 00:15:55.426 "data_offset": 2048, 00:15:55.426 "data_size": 63488 00:15:55.426 }, 00:15:55.426 { 00:15:55.426 "name": "BaseBdev3", 00:15:55.426 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:55.426 "is_configured": true, 00:15:55.426 "data_offset": 2048, 00:15:55.426 "data_size": 63488 00:15:55.426 }, 00:15:55.426 { 00:15:55.426 "name": "BaseBdev4", 00:15:55.426 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:55.426 "is_configured": true, 00:15:55.426 "data_offset": 2048, 00:15:55.426 "data_size": 63488 00:15:55.426 } 00:15:55.426 ] 00:15:55.426 }' 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.426 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.991 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 [2024-12-05 20:08:57.250264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.991 [2024-12-05 20:08:57.265587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:55.991 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 20:08:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.991 [2024-12-05 20:08:57.267542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.927 "name": "raid_bdev1", 00:15:56.927 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:56.927 "strip_size_kb": 0, 00:15:56.927 "state": "online", 00:15:56.927 "raid_level": "raid1", 00:15:56.927 "superblock": true, 00:15:56.927 "num_base_bdevs": 4, 00:15:56.927 "num_base_bdevs_discovered": 4, 00:15:56.927 "num_base_bdevs_operational": 4, 00:15:56.927 "process": { 00:15:56.927 "type": "rebuild", 00:15:56.927 "target": "spare", 00:15:56.927 "progress": { 00:15:56.927 "blocks": 20480, 00:15:56.927 "percent": 32 00:15:56.927 } 00:15:56.927 }, 00:15:56.927 "base_bdevs_list": [ 00:15:56.927 { 
00:15:56.927 "name": "spare", 00:15:56.927 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:15:56.927 "is_configured": true, 00:15:56.927 "data_offset": 2048, 00:15:56.927 "data_size": 63488 00:15:56.927 }, 00:15:56.927 { 00:15:56.927 "name": "BaseBdev2", 00:15:56.927 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:56.927 "is_configured": true, 00:15:56.927 "data_offset": 2048, 00:15:56.927 "data_size": 63488 00:15:56.927 }, 00:15:56.927 { 00:15:56.927 "name": "BaseBdev3", 00:15:56.927 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:56.927 "is_configured": true, 00:15:56.927 "data_offset": 2048, 00:15:56.927 "data_size": 63488 00:15:56.927 }, 00:15:56.927 { 00:15:56.927 "name": "BaseBdev4", 00:15:56.927 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:56.927 "is_configured": true, 00:15:56.927 "data_offset": 2048, 00:15:56.927 "data_size": 63488 00:15:56.927 } 00:15:56.927 ] 00:15:56.927 }' 00:15:56.927 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.186 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.186 [2024-12-05 20:08:58.423124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.187 [2024-12-05 20:08:58.472915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.187 [2024-12-05 
20:08:58.472999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.187 [2024-12-05 20:08:58.473019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.187 [2024-12-05 20:08:58.473030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.187 "name": "raid_bdev1", 00:15:57.187 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:57.187 "strip_size_kb": 0, 00:15:57.187 "state": "online", 00:15:57.187 "raid_level": "raid1", 00:15:57.187 "superblock": true, 00:15:57.187 "num_base_bdevs": 4, 00:15:57.187 "num_base_bdevs_discovered": 3, 00:15:57.187 "num_base_bdevs_operational": 3, 00:15:57.187 "base_bdevs_list": [ 00:15:57.187 { 00:15:57.187 "name": null, 00:15:57.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.187 "is_configured": false, 00:15:57.187 "data_offset": 0, 00:15:57.187 "data_size": 63488 00:15:57.187 }, 00:15:57.187 { 00:15:57.187 "name": "BaseBdev2", 00:15:57.187 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:57.187 "is_configured": true, 00:15:57.187 "data_offset": 2048, 00:15:57.187 "data_size": 63488 00:15:57.187 }, 00:15:57.187 { 00:15:57.187 "name": "BaseBdev3", 00:15:57.187 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:57.187 "is_configured": true, 00:15:57.187 "data_offset": 2048, 00:15:57.187 "data_size": 63488 00:15:57.187 }, 00:15:57.187 { 00:15:57.187 "name": "BaseBdev4", 00:15:57.187 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:57.187 "is_configured": true, 00:15:57.187 "data_offset": 2048, 00:15:57.187 "data_size": 63488 00:15:57.187 } 00:15:57.187 ] 00:15:57.187 }' 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.187 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.756 20:08:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.756 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.756 "name": "raid_bdev1", 00:15:57.756 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:57.756 "strip_size_kb": 0, 00:15:57.756 "state": "online", 00:15:57.756 "raid_level": "raid1", 00:15:57.756 "superblock": true, 00:15:57.756 "num_base_bdevs": 4, 00:15:57.756 "num_base_bdevs_discovered": 3, 00:15:57.756 "num_base_bdevs_operational": 3, 00:15:57.756 "base_bdevs_list": [ 00:15:57.756 { 00:15:57.756 "name": null, 00:15:57.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.756 "is_configured": false, 00:15:57.756 "data_offset": 0, 00:15:57.756 "data_size": 63488 00:15:57.756 }, 00:15:57.756 { 00:15:57.756 "name": "BaseBdev2", 00:15:57.756 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:57.757 "is_configured": true, 00:15:57.757 "data_offset": 2048, 00:15:57.757 "data_size": 63488 00:15:57.757 }, 00:15:57.757 { 00:15:57.757 "name": "BaseBdev3", 00:15:57.757 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:57.757 "is_configured": true, 00:15:57.757 "data_offset": 2048, 00:15:57.757 "data_size": 63488 
00:15:57.757 }, 00:15:57.757 { 00:15:57.757 "name": "BaseBdev4", 00:15:57.757 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:57.757 "is_configured": true, 00:15:57.757 "data_offset": 2048, 00:15:57.757 "data_size": 63488 00:15:57.757 } 00:15:57.757 ] 00:15:57.757 }' 00:15:57.757 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.757 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.757 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.757 [2024-12-05 20:08:59.053894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.757 [2024-12-05 20:08:59.068167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.757 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.757 [2024-12-05 20:08:59.070267] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
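The `@176`/`@177` checks that recur throughout this log extract the background-process fields from the `bdev_raid_get_bdevs` output with jq's `//` alternative operator. A standalone sketch of that check (assumes `jq` is installed; the JSON here is a trimmed stand-in for the RPC output):

```shell
# Trimmed stand-in for the bdev_raid_get_bdevs JSON captured in the log.
raid_bdev_info='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
# jq's // operator substitutes "none" when the .process object is absent,
# which is why an idle raid bdev compares equal to the literal string "none".
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$process_type/$process_target"   # prints rebuild/spare
```

When no rebuild is running (as after `raid_bdev_process_finish_done` later in this log), `.process` is missing and both expressions fall back to `none`.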
00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.698 "name": "raid_bdev1", 00:15:58.698 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:58.698 "strip_size_kb": 0, 00:15:58.698 "state": "online", 00:15:58.698 "raid_level": "raid1", 00:15:58.698 "superblock": true, 00:15:58.698 "num_base_bdevs": 4, 00:15:58.698 "num_base_bdevs_discovered": 4, 00:15:58.698 "num_base_bdevs_operational": 4, 00:15:58.698 "process": { 00:15:58.698 "type": "rebuild", 00:15:58.698 "target": "spare", 00:15:58.698 "progress": { 00:15:58.698 "blocks": 20480, 00:15:58.698 "percent": 32 00:15:58.698 } 00:15:58.698 }, 00:15:58.698 "base_bdevs_list": [ 00:15:58.698 { 00:15:58.698 "name": "spare", 00:15:58.698 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:15:58.698 "is_configured": true, 00:15:58.698 "data_offset": 2048, 00:15:58.698 "data_size": 63488 00:15:58.698 }, 00:15:58.698 { 00:15:58.698 "name": "BaseBdev2", 00:15:58.698 "uuid": "76482de1-d9d2-5dbb-a2ba-7f9b90b782c9", 00:15:58.698 "is_configured": true, 00:15:58.698 "data_offset": 2048, 00:15:58.698 "data_size": 63488 00:15:58.698 }, 00:15:58.698 { 00:15:58.698 "name": "BaseBdev3", 00:15:58.698 "uuid": 
"c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:58.698 "is_configured": true, 00:15:58.698 "data_offset": 2048, 00:15:58.698 "data_size": 63488 00:15:58.698 }, 00:15:58.698 { 00:15:58.698 "name": "BaseBdev4", 00:15:58.698 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:58.698 "is_configured": true, 00:15:58.698 "data_offset": 2048, 00:15:58.698 "data_size": 63488 00:15:58.698 } 00:15:58.698 ] 00:15:58.698 }' 00:15:58.698 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:58.957 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.957 [2024-12-05 20:09:00.237581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.957 [2024-12-05 20:09:00.375547] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.957 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.216 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.216 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.216 "name": "raid_bdev1", 00:15:59.216 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:59.216 "strip_size_kb": 0, 00:15:59.216 "state": "online", 00:15:59.216 "raid_level": "raid1", 00:15:59.216 "superblock": true, 00:15:59.216 "num_base_bdevs": 4, 00:15:59.216 "num_base_bdevs_discovered": 3, 00:15:59.216 "num_base_bdevs_operational": 3, 00:15:59.216 
"process": { 00:15:59.216 "type": "rebuild", 00:15:59.216 "target": "spare", 00:15:59.216 "progress": { 00:15:59.217 "blocks": 24576, 00:15:59.217 "percent": 38 00:15:59.217 } 00:15:59.217 }, 00:15:59.217 "base_bdevs_list": [ 00:15:59.217 { 00:15:59.217 "name": "spare", 00:15:59.217 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": null, 00:15:59.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.217 "is_configured": false, 00:15:59.217 "data_offset": 0, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": "BaseBdev3", 00:15:59.217 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": "BaseBdev4", 00:15:59.217 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 } 00:15:59.217 ] 00:15:59.217 }' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=462 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.217 20:09:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.217 "name": "raid_bdev1", 00:15:59.217 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:15:59.217 "strip_size_kb": 0, 00:15:59.217 "state": "online", 00:15:59.217 "raid_level": "raid1", 00:15:59.217 "superblock": true, 00:15:59.217 "num_base_bdevs": 4, 00:15:59.217 "num_base_bdevs_discovered": 3, 00:15:59.217 "num_base_bdevs_operational": 3, 00:15:59.217 "process": { 00:15:59.217 "type": "rebuild", 00:15:59.217 "target": "spare", 00:15:59.217 "progress": { 00:15:59.217 "blocks": 26624, 00:15:59.217 "percent": 41 00:15:59.217 } 00:15:59.217 }, 00:15:59.217 "base_bdevs_list": [ 00:15:59.217 { 00:15:59.217 "name": "spare", 00:15:59.217 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": null, 00:15:59.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.217 
"is_configured": false, 00:15:59.217 "data_offset": 0, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": "BaseBdev3", 00:15:59.217 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": "BaseBdev4", 00:15:59.217 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 2048, 00:15:59.217 "data_size": 63488 00:15:59.217 } 00:15:59.217 ] 00:15:59.217 }' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.217 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.618 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.618 "name": "raid_bdev1", 00:16:00.618 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:00.618 "strip_size_kb": 0, 00:16:00.618 "state": "online", 00:16:00.618 "raid_level": "raid1", 00:16:00.618 "superblock": true, 00:16:00.619 "num_base_bdevs": 4, 00:16:00.619 "num_base_bdevs_discovered": 3, 00:16:00.619 "num_base_bdevs_operational": 3, 00:16:00.619 "process": { 00:16:00.619 "type": "rebuild", 00:16:00.619 "target": "spare", 00:16:00.619 "progress": { 00:16:00.619 "blocks": 49152, 00:16:00.619 "percent": 77 00:16:00.619 } 00:16:00.619 }, 00:16:00.619 "base_bdevs_list": [ 00:16:00.619 { 00:16:00.619 "name": "spare", 00:16:00.619 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": null, 00:16:00.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.619 "is_configured": false, 00:16:00.619 "data_offset": 0, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": "BaseBdev3", 00:16:00.619 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 }, 00:16:00.619 { 00:16:00.619 "name": "BaseBdev4", 00:16:00.619 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:00.619 "is_configured": true, 00:16:00.619 "data_offset": 2048, 00:16:00.619 "data_size": 63488 00:16:00.619 } 00:16:00.619 ] 00:16:00.619 }' 
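The `percent` values in the rebuild `progress` objects above are consistent with integer division of completed blocks by the 63488-block data size of each base bdev (a sketch of the assumed arithmetic, not the actual SPDK source):

```shell
# data_size matches the "data_size": 63488 reported for each base bdev above.
data_size=63488
# The three block counts observed in this log's progress snapshots.
for blocks in 20480 24576 49152; do
    echo "$blocks -> $(( blocks * 100 / data_size ))%"
done
# prints:
# 20480 -> 32%
# 24576 -> 38%
# 49152 -> 77%
```

All three computed values match the `percent` fields reported in the snapshots above (32, 38, 77).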
00:16:00.619 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.619 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.619 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.619 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.619 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.878 [2024-12-05 20:09:02.284334] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.878 [2024-12-05 20:09:02.284516] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.878 [2024-12-05 20:09:02.284735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.503 20:09:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.503 "name": "raid_bdev1", 00:16:01.503 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:01.503 "strip_size_kb": 0, 00:16:01.503 "state": "online", 00:16:01.503 "raid_level": "raid1", 00:16:01.503 "superblock": true, 00:16:01.503 "num_base_bdevs": 4, 00:16:01.503 "num_base_bdevs_discovered": 3, 00:16:01.503 "num_base_bdevs_operational": 3, 00:16:01.503 "base_bdevs_list": [ 00:16:01.503 { 00:16:01.503 "name": "spare", 00:16:01.503 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:01.503 "is_configured": true, 00:16:01.503 "data_offset": 2048, 00:16:01.503 "data_size": 63488 00:16:01.503 }, 00:16:01.503 { 00:16:01.503 "name": null, 00:16:01.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.503 "is_configured": false, 00:16:01.503 "data_offset": 0, 00:16:01.503 "data_size": 63488 00:16:01.503 }, 00:16:01.503 { 00:16:01.503 "name": "BaseBdev3", 00:16:01.503 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:01.503 "is_configured": true, 00:16:01.503 "data_offset": 2048, 00:16:01.503 "data_size": 63488 00:16:01.503 }, 00:16:01.503 { 00:16:01.503 "name": "BaseBdev4", 00:16:01.503 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:01.503 "is_configured": true, 00:16:01.503 "data_offset": 2048, 00:16:01.503 "data_size": 63488 00:16:01.503 } 00:16:01.503 ] 00:16:01.503 }' 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.503 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.761 "name": "raid_bdev1", 00:16:01.761 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:01.761 "strip_size_kb": 0, 00:16:01.761 "state": "online", 00:16:01.761 "raid_level": "raid1", 00:16:01.761 "superblock": true, 00:16:01.761 "num_base_bdevs": 4, 00:16:01.761 "num_base_bdevs_discovered": 3, 00:16:01.761 "num_base_bdevs_operational": 3, 00:16:01.761 "base_bdevs_list": [ 00:16:01.761 { 00:16:01.761 "name": "spare", 00:16:01.761 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": 
null, 00:16:01.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.761 "is_configured": false, 00:16:01.761 "data_offset": 0, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": "BaseBdev3", 00:16:01.761 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": "BaseBdev4", 00:16:01.761 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.761 "data_size": 63488 00:16:01.761 } 00:16:01.761 ] 00:16:01.761 }' 00:16:01.761 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.761 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.761 "name": "raid_bdev1", 00:16:01.761 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:01.761 "strip_size_kb": 0, 00:16:01.761 "state": "online", 00:16:01.761 "raid_level": "raid1", 00:16:01.761 "superblock": true, 00:16:01.761 "num_base_bdevs": 4, 00:16:01.761 "num_base_bdevs_discovered": 3, 00:16:01.761 "num_base_bdevs_operational": 3, 00:16:01.761 "base_bdevs_list": [ 00:16:01.761 { 00:16:01.761 "name": "spare", 00:16:01.761 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": null, 00:16:01.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.761 "is_configured": false, 00:16:01.761 "data_offset": 0, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": "BaseBdev3", 00:16:01.761 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.761 "data_size": 63488 00:16:01.761 }, 00:16:01.761 { 00:16:01.761 "name": "BaseBdev4", 00:16:01.761 
"uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:01.761 "is_configured": true, 00:16:01.761 "data_offset": 2048, 00:16:01.762 "data_size": 63488 00:16:01.762 } 00:16:01.762 ] 00:16:01.762 }' 00:16:01.762 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.762 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.020 [2024-12-05 20:09:03.440749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.020 [2024-12-05 20:09:03.440828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.020 [2024-12-05 20:09:03.440957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.020 [2024-12-05 20:09:03.441069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.020 [2024-12-05 20:09:03.441122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.020 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.278 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:02.278 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.278 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.278 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.279 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:02.279 /dev/nbd0 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.538 1+0 records in 00:16:02.538 1+0 records out 00:16:02.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369681 s, 11.1 MB/s 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.538 /dev/nbd1 00:16:02.538 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:16:02.797 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.798 1+0 records in 00:16:02.798 1+0 records out 00:16:02.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243518 s, 16.8 MB/s 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.798 20:09:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.798 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.798 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.057 20:09:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.317 [2024-12-05 20:09:04.610993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.317 [2024-12-05 20:09:04.611088] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:03.317 [2024-12-05 20:09:04.611114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:03.317 [2024-12-05 20:09:04.611123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.317 [2024-12-05 20:09:04.613340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.317 [2024-12-05 20:09:04.613414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.317 [2024-12-05 20:09:04.613539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.317 [2024-12-05 20:09:04.613615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.317 [2024-12-05 20:09:04.613796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.317 [2024-12-05 20:09:04.613941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.317 spare 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.317 [2024-12-05 20:09:04.713870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:03.317 [2024-12-05 20:09:04.713903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:03.317 [2024-12-05 20:09:04.714206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:03.317 [2024-12-05 20:09:04.714384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:03.317 [2024-12-05 
20:09:04.714396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:03.317 [2024-12-05 20:09:04.714559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.317 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:03.578 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.578 "name": "raid_bdev1", 00:16:03.578 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:03.578 "strip_size_kb": 0, 00:16:03.578 "state": "online", 00:16:03.578 "raid_level": "raid1", 00:16:03.578 "superblock": true, 00:16:03.578 "num_base_bdevs": 4, 00:16:03.578 "num_base_bdevs_discovered": 3, 00:16:03.578 "num_base_bdevs_operational": 3, 00:16:03.578 "base_bdevs_list": [ 00:16:03.578 { 00:16:03.578 "name": "spare", 00:16:03.578 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:03.578 "is_configured": true, 00:16:03.578 "data_offset": 2048, 00:16:03.578 "data_size": 63488 00:16:03.578 }, 00:16:03.578 { 00:16:03.578 "name": null, 00:16:03.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.578 "is_configured": false, 00:16:03.578 "data_offset": 2048, 00:16:03.578 "data_size": 63488 00:16:03.578 }, 00:16:03.578 { 00:16:03.578 "name": "BaseBdev3", 00:16:03.578 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:03.578 "is_configured": true, 00:16:03.578 "data_offset": 2048, 00:16:03.578 "data_size": 63488 00:16:03.578 }, 00:16:03.578 { 00:16:03.578 "name": "BaseBdev4", 00:16:03.578 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:03.578 "is_configured": true, 00:16:03.578 "data_offset": 2048, 00:16:03.578 "data_size": 63488 00:16:03.578 } 00:16:03.578 ] 00:16:03.578 }' 00:16:03.578 20:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.578 20:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.838 20:09:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.838 "name": "raid_bdev1", 00:16:03.838 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:03.838 "strip_size_kb": 0, 00:16:03.838 "state": "online", 00:16:03.838 "raid_level": "raid1", 00:16:03.838 "superblock": true, 00:16:03.838 "num_base_bdevs": 4, 00:16:03.838 "num_base_bdevs_discovered": 3, 00:16:03.838 "num_base_bdevs_operational": 3, 00:16:03.838 "base_bdevs_list": [ 00:16:03.838 { 00:16:03.838 "name": "spare", 00:16:03.838 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:03.838 "is_configured": true, 00:16:03.838 "data_offset": 2048, 00:16:03.838 "data_size": 63488 00:16:03.838 }, 00:16:03.838 { 00:16:03.838 "name": null, 00:16:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.838 "is_configured": false, 00:16:03.838 "data_offset": 2048, 00:16:03.838 "data_size": 63488 00:16:03.838 }, 00:16:03.838 { 00:16:03.838 "name": "BaseBdev3", 00:16:03.838 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:03.838 "is_configured": true, 00:16:03.838 "data_offset": 2048, 00:16:03.838 "data_size": 63488 00:16:03.838 }, 00:16:03.838 { 00:16:03.838 "name": "BaseBdev4", 00:16:03.838 "uuid": 
"1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:03.838 "is_configured": true, 00:16:03.838 "data_offset": 2048, 00:16:03.838 "data_size": 63488 00:16:03.838 } 00:16:03.838 ] 00:16:03.838 }' 00:16:03.838 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.098 [2024-12-05 20:09:05.365751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.098 20:09:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.098 "name": "raid_bdev1", 00:16:04.098 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:04.098 "strip_size_kb": 0, 00:16:04.098 "state": "online", 00:16:04.098 "raid_level": "raid1", 00:16:04.098 "superblock": true, 00:16:04.098 "num_base_bdevs": 4, 00:16:04.098 "num_base_bdevs_discovered": 2, 00:16:04.098 "num_base_bdevs_operational": 2, 00:16:04.098 "base_bdevs_list": [ 00:16:04.098 { 
00:16:04.098 "name": null, 00:16:04.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.098 "is_configured": false, 00:16:04.098 "data_offset": 0, 00:16:04.098 "data_size": 63488 00:16:04.098 }, 00:16:04.098 { 00:16:04.098 "name": null, 00:16:04.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.098 "is_configured": false, 00:16:04.098 "data_offset": 2048, 00:16:04.098 "data_size": 63488 00:16:04.098 }, 00:16:04.098 { 00:16:04.098 "name": "BaseBdev3", 00:16:04.098 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:04.098 "is_configured": true, 00:16:04.098 "data_offset": 2048, 00:16:04.098 "data_size": 63488 00:16:04.098 }, 00:16:04.098 { 00:16:04.098 "name": "BaseBdev4", 00:16:04.098 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:04.098 "is_configured": true, 00:16:04.098 "data_offset": 2048, 00:16:04.098 "data_size": 63488 00:16:04.098 } 00:16:04.098 ] 00:16:04.098 }' 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.098 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.357 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.357 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.357 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.358 [2024-12-05 20:09:05.781047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.358 [2024-12-05 20:09:05.781251] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:04.358 [2024-12-05 20:09:05.781267] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:04.358 [2024-12-05 20:09:05.781307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.618 [2024-12-05 20:09:05.795989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:04.618 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.618 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:04.618 [2024-12-05 20:09:05.797995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.556 "name": "raid_bdev1", 00:16:05.556 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:05.556 "strip_size_kb": 0, 00:16:05.556 "state": "online", 00:16:05.556 "raid_level": "raid1", 
00:16:05.556 "superblock": true, 00:16:05.556 "num_base_bdevs": 4, 00:16:05.556 "num_base_bdevs_discovered": 3, 00:16:05.556 "num_base_bdevs_operational": 3, 00:16:05.556 "process": { 00:16:05.556 "type": "rebuild", 00:16:05.556 "target": "spare", 00:16:05.556 "progress": { 00:16:05.556 "blocks": 20480, 00:16:05.556 "percent": 32 00:16:05.556 } 00:16:05.556 }, 00:16:05.556 "base_bdevs_list": [ 00:16:05.556 { 00:16:05.556 "name": "spare", 00:16:05.556 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:05.556 "is_configured": true, 00:16:05.556 "data_offset": 2048, 00:16:05.556 "data_size": 63488 00:16:05.556 }, 00:16:05.556 { 00:16:05.556 "name": null, 00:16:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.556 "is_configured": false, 00:16:05.556 "data_offset": 2048, 00:16:05.556 "data_size": 63488 00:16:05.556 }, 00:16:05.556 { 00:16:05.556 "name": "BaseBdev3", 00:16:05.556 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:05.556 "is_configured": true, 00:16:05.556 "data_offset": 2048, 00:16:05.556 "data_size": 63488 00:16:05.556 }, 00:16:05.556 { 00:16:05.556 "name": "BaseBdev4", 00:16:05.556 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:05.556 "is_configured": true, 00:16:05.556 "data_offset": 2048, 00:16:05.556 "data_size": 63488 00:16:05.556 } 00:16:05.556 ] 00:16:05.556 }' 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:05.556 20:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.556 [2024-12-05 20:09:06.953498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.816 [2024-12-05 20:09:07.003070] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:05.816 [2024-12-05 20:09:07.003138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.816 [2024-12-05 20:09:07.003172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.816 [2024-12-05 20:09:07.003179] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.816 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.817 "name": "raid_bdev1", 00:16:05.817 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:05.817 "strip_size_kb": 0, 00:16:05.817 "state": "online", 00:16:05.817 "raid_level": "raid1", 00:16:05.817 "superblock": true, 00:16:05.817 "num_base_bdevs": 4, 00:16:05.817 "num_base_bdevs_discovered": 2, 00:16:05.817 "num_base_bdevs_operational": 2, 00:16:05.817 "base_bdevs_list": [ 00:16:05.817 { 00:16:05.817 "name": null, 00:16:05.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.817 "is_configured": false, 00:16:05.817 "data_offset": 0, 00:16:05.817 "data_size": 63488 00:16:05.817 }, 00:16:05.817 { 00:16:05.817 "name": null, 00:16:05.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.817 "is_configured": false, 00:16:05.817 "data_offset": 2048, 00:16:05.817 "data_size": 63488 00:16:05.817 }, 00:16:05.817 { 00:16:05.817 "name": "BaseBdev3", 00:16:05.817 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:05.817 "is_configured": true, 00:16:05.817 "data_offset": 2048, 00:16:05.817 "data_size": 63488 00:16:05.817 }, 00:16:05.817 { 00:16:05.817 "name": "BaseBdev4", 00:16:05.817 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:05.817 "is_configured": true, 00:16:05.817 "data_offset": 2048, 00:16:05.817 "data_size": 63488 00:16:05.817 } 00:16:05.817 ] 00:16:05.817 }' 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:05.817 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.077 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.077 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.077 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.077 [2024-12-05 20:09:07.456723] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.077 [2024-12-05 20:09:07.456864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.077 [2024-12-05 20:09:07.456930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:06.077 [2024-12-05 20:09:07.456966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.077 [2024-12-05 20:09:07.457473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.077 [2024-12-05 20:09:07.457534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.077 [2024-12-05 20:09:07.457664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.077 [2024-12-05 20:09:07.457706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:06.077 [2024-12-05 20:09:07.457758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:06.077 [2024-12-05 20:09:07.457839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.077 [2024-12-05 20:09:07.471748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:06.077 spare 00:16:06.077 20:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.077 [2024-12-05 20:09:07.473661] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.077 20:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.470 "name": "raid_bdev1", 00:16:07.470 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:07.470 "strip_size_kb": 0, 00:16:07.470 "state": "online", 00:16:07.470 
"raid_level": "raid1", 00:16:07.470 "superblock": true, 00:16:07.470 "num_base_bdevs": 4, 00:16:07.470 "num_base_bdevs_discovered": 3, 00:16:07.470 "num_base_bdevs_operational": 3, 00:16:07.470 "process": { 00:16:07.470 "type": "rebuild", 00:16:07.470 "target": "spare", 00:16:07.470 "progress": { 00:16:07.470 "blocks": 20480, 00:16:07.470 "percent": 32 00:16:07.470 } 00:16:07.470 }, 00:16:07.470 "base_bdevs_list": [ 00:16:07.470 { 00:16:07.470 "name": "spare", 00:16:07.470 "uuid": "d2808fd8-0e54-5714-8efc-fbca946661e1", 00:16:07.470 "is_configured": true, 00:16:07.470 "data_offset": 2048, 00:16:07.470 "data_size": 63488 00:16:07.470 }, 00:16:07.470 { 00:16:07.470 "name": null, 00:16:07.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.470 "is_configured": false, 00:16:07.470 "data_offset": 2048, 00:16:07.470 "data_size": 63488 00:16:07.470 }, 00:16:07.470 { 00:16:07.470 "name": "BaseBdev3", 00:16:07.470 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:07.470 "is_configured": true, 00:16:07.470 "data_offset": 2048, 00:16:07.470 "data_size": 63488 00:16:07.470 }, 00:16:07.470 { 00:16:07.470 "name": "BaseBdev4", 00:16:07.470 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:07.470 "is_configured": true, 00:16:07.470 "data_offset": 2048, 00:16:07.470 "data_size": 63488 00:16:07.470 } 00:16:07.470 ] 00:16:07.470 }' 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.470 [2024-12-05 20:09:08.613298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.470 [2024-12-05 20:09:08.678897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.470 [2024-12-05 20:09:08.678975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.470 [2024-12-05 20:09:08.678992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.470 [2024-12-05 20:09:08.679000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.470 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.471 
20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.471 "name": "raid_bdev1", 00:16:07.471 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:07.471 "strip_size_kb": 0, 00:16:07.471 "state": "online", 00:16:07.471 "raid_level": "raid1", 00:16:07.471 "superblock": true, 00:16:07.471 "num_base_bdevs": 4, 00:16:07.471 "num_base_bdevs_discovered": 2, 00:16:07.471 "num_base_bdevs_operational": 2, 00:16:07.471 "base_bdevs_list": [ 00:16:07.471 { 00:16:07.471 "name": null, 00:16:07.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.471 "is_configured": false, 00:16:07.471 "data_offset": 0, 00:16:07.471 "data_size": 63488 00:16:07.471 }, 00:16:07.471 { 00:16:07.471 "name": null, 00:16:07.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.471 "is_configured": false, 00:16:07.471 "data_offset": 2048, 00:16:07.471 "data_size": 63488 00:16:07.471 }, 00:16:07.471 { 00:16:07.471 "name": "BaseBdev3", 00:16:07.471 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:07.471 "is_configured": true, 00:16:07.471 "data_offset": 2048, 00:16:07.471 "data_size": 63488 00:16:07.471 }, 00:16:07.471 { 00:16:07.471 "name": "BaseBdev4", 00:16:07.471 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:07.471 "is_configured": true, 00:16:07.471 "data_offset": 2048, 00:16:07.471 "data_size": 63488 00:16:07.471 } 00:16:07.471 ] 00:16:07.471 }' 00:16:07.471 20:09:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.471 20:09:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.041 "name": "raid_bdev1", 00:16:08.041 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:08.041 "strip_size_kb": 0, 00:16:08.041 "state": "online", 00:16:08.041 "raid_level": "raid1", 00:16:08.041 "superblock": true, 00:16:08.041 "num_base_bdevs": 4, 00:16:08.041 "num_base_bdevs_discovered": 2, 00:16:08.041 "num_base_bdevs_operational": 2, 00:16:08.041 "base_bdevs_list": [ 00:16:08.041 { 00:16:08.041 "name": null, 00:16:08.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.041 "is_configured": false, 00:16:08.041 "data_offset": 0, 00:16:08.041 "data_size": 63488 00:16:08.041 }, 00:16:08.041 
{ 00:16:08.041 "name": null, 00:16:08.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.041 "is_configured": false, 00:16:08.041 "data_offset": 2048, 00:16:08.041 "data_size": 63488 00:16:08.041 }, 00:16:08.041 { 00:16:08.041 "name": "BaseBdev3", 00:16:08.041 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:08.041 "is_configured": true, 00:16:08.041 "data_offset": 2048, 00:16:08.041 "data_size": 63488 00:16:08.041 }, 00:16:08.041 { 00:16:08.041 "name": "BaseBdev4", 00:16:08.041 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:08.041 "is_configured": true, 00:16:08.041 "data_offset": 2048, 00:16:08.041 "data_size": 63488 00:16:08.041 } 00:16:08.041 ] 00:16:08.041 }' 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.041 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 [2024-12-05 20:09:09.311998] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.041 [2024-12-05 20:09:09.312055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.041 [2024-12-05 20:09:09.312075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:08.041 [2024-12-05 20:09:09.312085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.042 [2024-12-05 20:09:09.312531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.042 [2024-12-05 20:09:09.312553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.042 [2024-12-05 20:09:09.312646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:08.042 [2024-12-05 20:09:09.312661] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:08.042 [2024-12-05 20:09:09.312668] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:08.042 [2024-12-05 20:09:09.312692] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:08.042 BaseBdev1 00:16:08.042 20:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.042 20:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.979 20:09:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.979 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.979 "name": "raid_bdev1", 00:16:08.979 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:08.979 "strip_size_kb": 0, 00:16:08.979 "state": "online", 00:16:08.979 "raid_level": "raid1", 00:16:08.979 "superblock": true, 00:16:08.979 "num_base_bdevs": 4, 00:16:08.979 "num_base_bdevs_discovered": 2, 00:16:08.979 "num_base_bdevs_operational": 2, 00:16:08.979 "base_bdevs_list": [ 00:16:08.979 { 00:16:08.979 "name": null, 00:16:08.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.979 "is_configured": false, 00:16:08.979 "data_offset": 0, 00:16:08.979 "data_size": 63488 00:16:08.979 }, 00:16:08.979 { 00:16:08.979 "name": null, 00:16:08.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.979 
"is_configured": false, 00:16:08.979 "data_offset": 2048, 00:16:08.980 "data_size": 63488 00:16:08.980 }, 00:16:08.980 { 00:16:08.980 "name": "BaseBdev3", 00:16:08.980 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:08.980 "is_configured": true, 00:16:08.980 "data_offset": 2048, 00:16:08.980 "data_size": 63488 00:16:08.980 }, 00:16:08.980 { 00:16:08.980 "name": "BaseBdev4", 00:16:08.980 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:08.980 "is_configured": true, 00:16:08.980 "data_offset": 2048, 00:16:08.980 "data_size": 63488 00:16:08.980 } 00:16:08.980 ] 00:16:08.980 }' 00:16:08.980 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.980 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:09.549 "name": "raid_bdev1", 00:16:09.549 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:09.549 "strip_size_kb": 0, 00:16:09.549 "state": "online", 00:16:09.549 "raid_level": "raid1", 00:16:09.549 "superblock": true, 00:16:09.549 "num_base_bdevs": 4, 00:16:09.549 "num_base_bdevs_discovered": 2, 00:16:09.549 "num_base_bdevs_operational": 2, 00:16:09.549 "base_bdevs_list": [ 00:16:09.549 { 00:16:09.549 "name": null, 00:16:09.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.549 "is_configured": false, 00:16:09.549 "data_offset": 0, 00:16:09.549 "data_size": 63488 00:16:09.549 }, 00:16:09.549 { 00:16:09.549 "name": null, 00:16:09.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.549 "is_configured": false, 00:16:09.549 "data_offset": 2048, 00:16:09.549 "data_size": 63488 00:16:09.549 }, 00:16:09.549 { 00:16:09.549 "name": "BaseBdev3", 00:16:09.549 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:09.549 "is_configured": true, 00:16:09.549 "data_offset": 2048, 00:16:09.549 "data_size": 63488 00:16:09.549 }, 00:16:09.549 { 00:16:09.549 "name": "BaseBdev4", 00:16:09.549 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:09.549 "is_configured": true, 00:16:09.549 "data_offset": 2048, 00:16:09.549 "data_size": 63488 00:16:09.549 } 00:16:09.549 ] 00:16:09.549 }' 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.549 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.549 [2024-12-05 20:09:10.977235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.549 [2024-12-05 20:09:10.977493] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:09.549 [2024-12-05 20:09:10.977555] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:09.549 request: 00:16:09.810 { 00:16:09.810 "base_bdev": "BaseBdev1", 00:16:09.810 "raid_bdev": "raid_bdev1", 00:16:09.810 "method": "bdev_raid_add_base_bdev", 00:16:09.810 "req_id": 1 00:16:09.810 } 00:16:09.810 Got JSON-RPC error response 00:16:09.810 response: 00:16:09.810 { 00:16:09.810 "code": -22, 00:16:09.810 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:09.810 } 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.810 20:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:10.750 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.751 20:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:10.751 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.751 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.751 "name": "raid_bdev1", 00:16:10.751 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:10.751 "strip_size_kb": 0, 00:16:10.751 "state": "online", 00:16:10.751 "raid_level": "raid1", 00:16:10.751 "superblock": true, 00:16:10.751 "num_base_bdevs": 4, 00:16:10.751 "num_base_bdevs_discovered": 2, 00:16:10.751 "num_base_bdevs_operational": 2, 00:16:10.751 "base_bdevs_list": [ 00:16:10.751 { 00:16:10.751 "name": null, 00:16:10.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.751 "is_configured": false, 00:16:10.751 "data_offset": 0, 00:16:10.751 "data_size": 63488 00:16:10.751 }, 00:16:10.751 { 00:16:10.751 "name": null, 00:16:10.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.751 "is_configured": false, 00:16:10.751 "data_offset": 2048, 00:16:10.751 "data_size": 63488 00:16:10.751 }, 00:16:10.751 { 00:16:10.751 "name": "BaseBdev3", 00:16:10.751 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:10.751 "is_configured": true, 00:16:10.751 "data_offset": 2048, 00:16:10.751 "data_size": 63488 00:16:10.751 }, 00:16:10.751 { 00:16:10.751 "name": "BaseBdev4", 00:16:10.751 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:10.751 "is_configured": true, 00:16:10.751 "data_offset": 2048, 00:16:10.751 "data_size": 63488 00:16:10.751 } 00:16:10.751 ] 00:16:10.751 }' 00:16:10.751 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.751 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.321 20:09:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.321 "name": "raid_bdev1", 00:16:11.321 "uuid": "5f7f77e3-294d-4f6c-9ae9-e3e01bdebce5", 00:16:11.321 "strip_size_kb": 0, 00:16:11.321 "state": "online", 00:16:11.321 "raid_level": "raid1", 00:16:11.321 "superblock": true, 00:16:11.321 "num_base_bdevs": 4, 00:16:11.321 "num_base_bdevs_discovered": 2, 00:16:11.321 "num_base_bdevs_operational": 2, 00:16:11.321 "base_bdevs_list": [ 00:16:11.321 { 00:16:11.321 "name": null, 00:16:11.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.321 "is_configured": false, 00:16:11.321 "data_offset": 0, 00:16:11.321 "data_size": 63488 00:16:11.321 }, 00:16:11.321 { 00:16:11.321 "name": null, 00:16:11.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.321 "is_configured": false, 00:16:11.321 "data_offset": 2048, 00:16:11.321 "data_size": 63488 00:16:11.321 }, 00:16:11.321 { 00:16:11.321 "name": "BaseBdev3", 00:16:11.321 "uuid": "c4dcea35-70eb-5762-ade1-221f32ba04aa", 00:16:11.321 "is_configured": true, 00:16:11.321 "data_offset": 2048, 00:16:11.321 "data_size": 63488 00:16:11.321 }, 
00:16:11.321 { 00:16:11.321 "name": "BaseBdev4", 00:16:11.321 "uuid": "1eca9ca2-01df-5953-9789-dadcff4c9b6a", 00:16:11.321 "is_configured": true, 00:16:11.321 "data_offset": 2048, 00:16:11.321 "data_size": 63488 00:16:11.321 } 00:16:11.321 ] 00:16:11.321 }' 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.321 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78093 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78093 ']' 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78093 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78093 00:16:11.322 killing process with pid 78093 00:16:11.322 Received shutdown signal, test time was about 60.000000 seconds 00:16:11.322 00:16:11.322 Latency(us) 00:16:11.322 [2024-12-05T20:09:12.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.322 [2024-12-05T20:09:12.759Z] =================================================================================================================== 00:16:11.322 [2024-12-05T20:09:12.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78093' 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78093 00:16:11.322 [2024-12-05 20:09:12.650316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.322 [2024-12-05 20:09:12.650444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.322 20:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78093 00:16:11.322 [2024-12-05 20:09:12.650522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.322 [2024-12-05 20:09:12.650532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:11.891 [2024-12-05 20:09:13.150942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.271 20:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:13.271 00:16:13.271 real 0m24.912s 00:16:13.271 user 0m30.367s 00:16:13.271 sys 0m3.562s 00:16:13.271 ************************************ 00:16:13.271 END TEST raid_rebuild_test_sb 00:16:13.271 ************************************ 00:16:13.271 20:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.271 20:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.271 20:09:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:13.271 20:09:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:13.271 20:09:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.271 20:09:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:13.271 ************************************ 00:16:13.271 START TEST raid_rebuild_test_io 00:16:13.271 ************************************ 00:16:13.271 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:16:13.271 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78844 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78844 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78844 ']' 00:16:13.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.272 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.272 [2024-12-05 20:09:14.445438] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:16:13.272 [2024-12-05 20:09:14.445635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:13.272 Zero copy mechanism will not be used. 
00:16:13.272 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78844 ] 00:16:13.272 [2024-12-05 20:09:14.616446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.531 [2024-12-05 20:09:14.726268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.531 [2024-12-05 20:09:14.921260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.531 [2024-12-05 20:09:14.921371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.100 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.100 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:14.100 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.100 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 BaseBdev1_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 [2024-12-05 20:09:15.310851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:14.101 [2024-12-05 20:09:15.310965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:14.101 [2024-12-05 20:09:15.310994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:14.101 [2024-12-05 20:09:15.311006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.101 [2024-12-05 20:09:15.313105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.101 [2024-12-05 20:09:15.313145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.101 BaseBdev1 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 BaseBdev2_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 [2024-12-05 20:09:15.365511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:14.101 [2024-12-05 20:09:15.365625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.101 [2024-12-05 20:09:15.365698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:14.101 [2024-12-05 20:09:15.365736] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.101 [2024-12-05 20:09:15.367869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.101 [2024-12-05 20:09:15.367952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.101 BaseBdev2 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 BaseBdev3_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 [2024-12-05 20:09:15.433582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:14.101 [2024-12-05 20:09:15.433639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.101 [2024-12-05 20:09:15.433663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:14.101 [2024-12-05 20:09:15.433675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.101 [2024-12-05 20:09:15.435738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:16:14.101 [2024-12-05 20:09:15.435777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:14.101 BaseBdev3 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 BaseBdev4_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 [2024-12-05 20:09:15.487073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:14.101 [2024-12-05 20:09:15.487131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.101 [2024-12-05 20:09:15.487154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:14.101 [2024-12-05 20:09:15.487165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.101 [2024-12-05 20:09:15.489259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.101 [2024-12-05 20:09:15.489301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:14.101 BaseBdev4 00:16:14.101 20:09:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.101 spare_malloc 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.101 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 spare_delay 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 [2024-12-05 20:09:15.552079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.359 [2024-12-05 20:09:15.552130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.359 [2024-12-05 20:09:15.552149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:14.359 [2024-12-05 20:09:15.552159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.359 [2024-12-05 20:09:15.554218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:16:14.359 [2024-12-05 20:09:15.554259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.359 spare 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 [2024-12-05 20:09:15.564087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.359 [2024-12-05 20:09:15.566006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.359 [2024-12-05 20:09:15.566065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.359 [2024-12-05 20:09:15.566114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.359 [2024-12-05 20:09:15.566188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:14.359 [2024-12-05 20:09:15.566200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:14.359 [2024-12-05 20:09:15.566433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:14.359 [2024-12-05 20:09:15.566595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:14.359 [2024-12-05 20:09:15.566607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:14.359 [2024-12-05 20:09:15.566743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.359 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.359 "name": "raid_bdev1", 00:16:14.359 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:14.359 "strip_size_kb": 0, 00:16:14.359 "state": "online", 00:16:14.359 "raid_level": "raid1", 00:16:14.359 "superblock": 
false, 00:16:14.359 "num_base_bdevs": 4, 00:16:14.359 "num_base_bdevs_discovered": 4, 00:16:14.359 "num_base_bdevs_operational": 4, 00:16:14.359 "base_bdevs_list": [ 00:16:14.359 { 00:16:14.359 "name": "BaseBdev1", 00:16:14.359 "uuid": "30b0a113-3c20-5374-986c-7ed9c53e2e04", 00:16:14.359 "is_configured": true, 00:16:14.359 "data_offset": 0, 00:16:14.359 "data_size": 65536 00:16:14.359 }, 00:16:14.359 { 00:16:14.359 "name": "BaseBdev2", 00:16:14.359 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:14.359 "is_configured": true, 00:16:14.359 "data_offset": 0, 00:16:14.359 "data_size": 65536 00:16:14.359 }, 00:16:14.359 { 00:16:14.359 "name": "BaseBdev3", 00:16:14.359 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:14.359 "is_configured": true, 00:16:14.359 "data_offset": 0, 00:16:14.359 "data_size": 65536 00:16:14.359 }, 00:16:14.359 { 00:16:14.359 "name": "BaseBdev4", 00:16:14.359 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:14.359 "is_configured": true, 00:16:14.359 "data_offset": 0, 00:16:14.359 "data_size": 65536 00:16:14.359 } 00:16:14.359 ] 00:16:14.359 }' 00:16:14.360 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.360 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.618 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.618 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:14.618 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.618 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.618 [2024-12-05 20:09:16.035674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.878 [2024-12-05 20:09:16.147078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.878 20:09:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.878 "name": "raid_bdev1", 00:16:14.878 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:14.878 "strip_size_kb": 0, 00:16:14.878 "state": "online", 00:16:14.878 "raid_level": "raid1", 00:16:14.878 "superblock": false, 00:16:14.878 "num_base_bdevs": 4, 00:16:14.878 "num_base_bdevs_discovered": 3, 00:16:14.878 "num_base_bdevs_operational": 3, 00:16:14.878 "base_bdevs_list": [ 00:16:14.878 { 00:16:14.878 "name": null, 00:16:14.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.878 "is_configured": false, 00:16:14.878 "data_offset": 0, 00:16:14.878 "data_size": 65536 00:16:14.878 }, 00:16:14.878 { 00:16:14.878 "name": "BaseBdev2", 00:16:14.878 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:14.878 
"is_configured": true, 00:16:14.878 "data_offset": 0, 00:16:14.878 "data_size": 65536 00:16:14.878 }, 00:16:14.878 { 00:16:14.878 "name": "BaseBdev3", 00:16:14.878 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:14.878 "is_configured": true, 00:16:14.878 "data_offset": 0, 00:16:14.878 "data_size": 65536 00:16:14.878 }, 00:16:14.878 { 00:16:14.878 "name": "BaseBdev4", 00:16:14.878 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:14.878 "is_configured": true, 00:16:14.878 "data_offset": 0, 00:16:14.878 "data_size": 65536 00:16:14.878 } 00:16:14.878 ] 00:16:14.878 }' 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.878 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.878 [2024-12-05 20:09:16.247266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:14.878 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.878 Zero copy mechanism will not be used. 00:16:14.878 Running I/O for 60 seconds... 
00:16:15.138 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.138 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.138 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.138 [2024-12-05 20:09:16.560327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.397 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.397 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:15.397 [2024-12-05 20:09:16.630931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:15.397 [2024-12-05 20:09:16.632985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.397 [2024-12-05 20:09:16.740662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.397 [2024-12-05 20:09:16.742253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.657 [2024-12-05 20:09:16.960272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.657 [2024-12-05 20:09:16.961147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.917 136.00 IOPS, 408.00 MiB/s [2024-12-05T20:09:17.354Z] [2024-12-05 20:09:17.293408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:16.177 [2024-12-05 20:09:17.505681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:16.177 [2024-12-05 20:09:17.506161] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.437 "name": "raid_bdev1", 00:16:16.437 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:16.437 "strip_size_kb": 0, 00:16:16.437 "state": "online", 00:16:16.437 "raid_level": "raid1", 00:16:16.437 "superblock": false, 00:16:16.437 "num_base_bdevs": 4, 00:16:16.437 "num_base_bdevs_discovered": 4, 00:16:16.437 "num_base_bdevs_operational": 4, 00:16:16.437 "process": { 00:16:16.437 "type": "rebuild", 00:16:16.437 "target": "spare", 00:16:16.437 "progress": { 00:16:16.437 "blocks": 10240, 00:16:16.437 "percent": 15 00:16:16.437 } 00:16:16.437 }, 00:16:16.437 "base_bdevs_list": [ 00:16:16.437 { 00:16:16.437 "name": "spare", 00:16:16.437 "uuid": 
"cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:16.437 "is_configured": true, 00:16:16.437 "data_offset": 0, 00:16:16.437 "data_size": 65536 00:16:16.437 }, 00:16:16.437 { 00:16:16.437 "name": "BaseBdev2", 00:16:16.437 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:16.437 "is_configured": true, 00:16:16.437 "data_offset": 0, 00:16:16.437 "data_size": 65536 00:16:16.437 }, 00:16:16.437 { 00:16:16.437 "name": "BaseBdev3", 00:16:16.437 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:16.437 "is_configured": true, 00:16:16.437 "data_offset": 0, 00:16:16.438 "data_size": 65536 00:16:16.438 }, 00:16:16.438 { 00:16:16.438 "name": "BaseBdev4", 00:16:16.438 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:16.438 "is_configured": true, 00:16:16.438 "data_offset": 0, 00:16:16.438 "data_size": 65536 00:16:16.438 } 00:16:16.438 ] 00:16:16.438 }' 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.438 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.438 [2024-12-05 20:09:17.785197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.438 [2024-12-05 20:09:17.827978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:16.438 [2024-12-05 20:09:17.828509] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:16.438 [2024-12-05 20:09:17.829587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:16.438 [2024-12-05 20:09:17.838498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.438 [2024-12-05 20:09:17.838545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.438 [2024-12-05 20:09:17.838558] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:16.706 [2024-12-05 20:09:17.874661] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.706 "name": "raid_bdev1", 00:16:16.706 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:16.706 "strip_size_kb": 0, 00:16:16.706 "state": "online", 00:16:16.706 "raid_level": "raid1", 00:16:16.706 "superblock": false, 00:16:16.706 "num_base_bdevs": 4, 00:16:16.706 "num_base_bdevs_discovered": 3, 00:16:16.706 "num_base_bdevs_operational": 3, 00:16:16.706 "base_bdevs_list": [ 00:16:16.706 { 00:16:16.706 "name": null, 00:16:16.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.706 "is_configured": false, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 }, 00:16:16.706 { 00:16:16.706 "name": "BaseBdev2", 00:16:16.706 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:16.706 "is_configured": true, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 }, 00:16:16.706 { 00:16:16.706 "name": "BaseBdev3", 00:16:16.706 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:16.706 "is_configured": true, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 }, 00:16:16.706 { 00:16:16.706 "name": "BaseBdev4", 00:16:16.706 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:16.706 "is_configured": true, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 } 00:16:16.706 ] 00:16:16.706 }' 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:16.706 20:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.985 152.00 IOPS, 456.00 MiB/s [2024-12-05T20:09:18.422Z] 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.985 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.985 "name": "raid_bdev1", 00:16:16.985 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:16.985 "strip_size_kb": 0, 00:16:16.985 "state": "online", 00:16:16.985 "raid_level": "raid1", 00:16:16.985 "superblock": false, 00:16:16.985 "num_base_bdevs": 4, 00:16:16.985 "num_base_bdevs_discovered": 3, 00:16:16.985 "num_base_bdevs_operational": 3, 00:16:16.985 "base_bdevs_list": [ 00:16:16.985 { 00:16:16.985 "name": null, 00:16:16.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.985 "is_configured": false, 00:16:16.985 "data_offset": 0, 00:16:16.985 "data_size": 65536 00:16:16.985 }, 00:16:16.985 { 
00:16:16.985 "name": "BaseBdev2", 00:16:16.985 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:16.985 "is_configured": true, 00:16:16.985 "data_offset": 0, 00:16:16.985 "data_size": 65536 00:16:16.985 }, 00:16:16.985 { 00:16:16.985 "name": "BaseBdev3", 00:16:16.985 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:16.985 "is_configured": true, 00:16:16.985 "data_offset": 0, 00:16:16.985 "data_size": 65536 00:16:16.985 }, 00:16:16.985 { 00:16:16.985 "name": "BaseBdev4", 00:16:16.985 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:16.985 "is_configured": true, 00:16:16.985 "data_offset": 0, 00:16:16.986 "data_size": 65536 00:16:16.986 } 00:16:16.986 ] 00:16:16.986 }' 00:16:16.986 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.264 [2024-12-05 20:09:18.492665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.264 20:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:17.264 [2024-12-05 20:09:18.553023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:17.264 [2024-12-05 20:09:18.555014] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.264 [2024-12-05 20:09:18.670891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:17.264 [2024-12-05 20:09:18.672373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:17.522 [2024-12-05 20:09:18.908778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:17.522 [2024-12-05 20:09:18.909486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:18.094 158.67 IOPS, 476.00 MiB/s [2024-12-05T20:09:19.531Z] [2024-12-05 20:09:19.294666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.358 "name": "raid_bdev1", 00:16:18.358 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:18.358 "strip_size_kb": 0, 00:16:18.358 "state": "online", 00:16:18.358 "raid_level": "raid1", 00:16:18.358 "superblock": false, 00:16:18.358 "num_base_bdevs": 4, 00:16:18.358 "num_base_bdevs_discovered": 4, 00:16:18.358 "num_base_bdevs_operational": 4, 00:16:18.358 "process": { 00:16:18.358 "type": "rebuild", 00:16:18.358 "target": "spare", 00:16:18.358 "progress": { 00:16:18.358 "blocks": 10240, 00:16:18.358 "percent": 15 00:16:18.358 } 00:16:18.358 }, 00:16:18.358 "base_bdevs_list": [ 00:16:18.358 { 00:16:18.358 "name": "spare", 00:16:18.358 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:18.358 "is_configured": true, 00:16:18.358 "data_offset": 0, 00:16:18.358 "data_size": 65536 00:16:18.358 }, 00:16:18.358 { 00:16:18.358 "name": "BaseBdev2", 00:16:18.358 "uuid": "fe0cdde8-9ca5-5733-8dc4-d168d4edcbde", 00:16:18.358 "is_configured": true, 00:16:18.358 "data_offset": 0, 00:16:18.358 "data_size": 65536 00:16:18.358 }, 00:16:18.358 { 00:16:18.358 "name": "BaseBdev3", 00:16:18.358 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:18.358 "is_configured": true, 00:16:18.358 "data_offset": 0, 00:16:18.358 "data_size": 65536 00:16:18.358 }, 00:16:18.358 { 00:16:18.358 "name": "BaseBdev4", 00:16:18.358 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:18.358 "is_configured": true, 00:16:18.358 "data_offset": 0, 00:16:18.358 "data_size": 65536 00:16:18.358 } 00:16:18.358 ] 00:16:18.358 }' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.358 
20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 [2024-12-05 20:09:19.676785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.358 [2024-12-05 20:09:19.752476] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:18.358 [2024-12-05 20:09:19.752630] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:18.358 [2024-12-05 20:09:19.754658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.358 20:09:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.617 "name": "raid_bdev1", 00:16:18.617 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:18.617 "strip_size_kb": 0, 00:16:18.617 "state": "online", 00:16:18.617 "raid_level": "raid1", 00:16:18.617 "superblock": false, 00:16:18.617 "num_base_bdevs": 4, 00:16:18.617 "num_base_bdevs_discovered": 3, 00:16:18.617 "num_base_bdevs_operational": 3, 00:16:18.617 "process": { 00:16:18.617 "type": "rebuild", 00:16:18.617 "target": "spare", 00:16:18.617 "progress": { 00:16:18.617 "blocks": 14336, 00:16:18.617 "percent": 21 00:16:18.617 } 00:16:18.617 }, 00:16:18.617 "base_bdevs_list": [ 00:16:18.617 { 00:16:18.617 "name": "spare", 00:16:18.617 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": null, 00:16:18.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.617 "is_configured": false, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 
00:16:18.617 "name": "BaseBdev3", 00:16:18.617 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": "BaseBdev4", 00:16:18.617 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 } 00:16:18.617 ] 00:16:18.617 }' 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.617 [2024-12-05 20:09:19.860984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:18.617 [2024-12-05 20:09:19.861321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.617 "name": "raid_bdev1", 00:16:18.617 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:18.617 "strip_size_kb": 0, 00:16:18.617 "state": "online", 00:16:18.617 "raid_level": "raid1", 00:16:18.617 "superblock": false, 00:16:18.617 "num_base_bdevs": 4, 00:16:18.617 "num_base_bdevs_discovered": 3, 00:16:18.617 "num_base_bdevs_operational": 3, 00:16:18.617 "process": { 00:16:18.617 "type": "rebuild", 00:16:18.617 "target": "spare", 00:16:18.617 "progress": { 00:16:18.617 "blocks": 16384, 00:16:18.617 "percent": 25 00:16:18.617 } 00:16:18.617 }, 00:16:18.617 "base_bdevs_list": [ 00:16:18.617 { 00:16:18.617 "name": "spare", 00:16:18.617 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": null, 00:16:18.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.617 "is_configured": false, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": "BaseBdev3", 00:16:18.617 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": "BaseBdev4", 00:16:18.617 
"uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 65536 00:16:18.617 } 00:16:18.617 ] 00:16:18.617 }' 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.617 20:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.617 20:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.617 20:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.133 133.75 IOPS, 401.25 MiB/s [2024-12-05T20:09:20.570Z] [2024-12-05 20:09:20.328009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:19.392 [2024-12-05 20:09:20.677315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:19.651 [2024-12-05 20:09:20.901144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:19.651 [2024-12-05 20:09:21.031525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.651 20:09:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.909 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.909 "name": "raid_bdev1", 00:16:19.909 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:19.909 "strip_size_kb": 0, 00:16:19.909 "state": "online", 00:16:19.909 "raid_level": "raid1", 00:16:19.909 "superblock": false, 00:16:19.909 "num_base_bdevs": 4, 00:16:19.909 "num_base_bdevs_discovered": 3, 00:16:19.909 "num_base_bdevs_operational": 3, 00:16:19.909 "process": { 00:16:19.909 "type": "rebuild", 00:16:19.910 "target": "spare", 00:16:19.910 "progress": { 00:16:19.910 "blocks": 34816, 00:16:19.910 "percent": 53 00:16:19.910 } 00:16:19.910 }, 00:16:19.910 "base_bdevs_list": [ 00:16:19.910 { 00:16:19.910 "name": "spare", 00:16:19.910 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:19.910 "is_configured": true, 00:16:19.910 "data_offset": 0, 00:16:19.910 "data_size": 65536 00:16:19.910 }, 00:16:19.910 { 00:16:19.910 "name": null, 00:16:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.910 "is_configured": false, 00:16:19.910 "data_offset": 0, 00:16:19.910 "data_size": 65536 00:16:19.910 }, 00:16:19.910 { 00:16:19.910 "name": "BaseBdev3", 00:16:19.910 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:19.910 "is_configured": true, 00:16:19.910 "data_offset": 0, 00:16:19.910 
"data_size": 65536 00:16:19.910 }, 00:16:19.910 { 00:16:19.910 "name": "BaseBdev4", 00:16:19.910 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:19.910 "is_configured": true, 00:16:19.910 "data_offset": 0, 00:16:19.910 "data_size": 65536 00:16:19.910 } 00:16:19.910 ] 00:16:19.910 }' 00:16:19.910 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.910 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.910 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.910 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.910 20:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.910 118.60 IOPS, 355.80 MiB/s [2024-12-05T20:09:21.347Z] [2024-12-05 20:09:21.250183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:20.167 [2024-12-05 20:09:21.371502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:20.167 [2024-12-05 20:09:21.372085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:20.425 [2024-12-05 20:09:21.718394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.993 20:09:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.993 106.50 IOPS, 319.50 MiB/s [2024-12-05T20:09:22.430Z] 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.993 "name": "raid_bdev1", 00:16:20.993 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:20.993 "strip_size_kb": 0, 00:16:20.993 "state": "online", 00:16:20.993 "raid_level": "raid1", 00:16:20.993 "superblock": false, 00:16:20.993 "num_base_bdevs": 4, 00:16:20.993 "num_base_bdevs_discovered": 3, 00:16:20.993 "num_base_bdevs_operational": 3, 00:16:20.993 "process": { 00:16:20.993 "type": "rebuild", 00:16:20.993 "target": "spare", 00:16:20.993 "progress": { 00:16:20.993 "blocks": 51200, 00:16:20.993 "percent": 78 00:16:20.993 } 00:16:20.993 }, 00:16:20.993 "base_bdevs_list": [ 00:16:20.993 { 00:16:20.993 "name": "spare", 00:16:20.993 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:20.993 "is_configured": true, 00:16:20.993 "data_offset": 0, 00:16:20.993 "data_size": 65536 00:16:20.993 }, 00:16:20.993 { 00:16:20.993 "name": null, 00:16:20.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.993 "is_configured": false, 00:16:20.993 "data_offset": 0, 00:16:20.993 "data_size": 65536 00:16:20.993 }, 00:16:20.993 { 00:16:20.993 
"name": "BaseBdev3", 00:16:20.993 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:20.993 "is_configured": true, 00:16:20.993 "data_offset": 0, 00:16:20.993 "data_size": 65536 00:16:20.993 }, 00:16:20.993 { 00:16:20.993 "name": "BaseBdev4", 00:16:20.994 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:20.994 "is_configured": true, 00:16:20.994 "data_offset": 0, 00:16:20.994 "data_size": 65536 00:16:20.994 } 00:16:20.994 ] 00:16:20.994 }' 00:16:20.994 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.994 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.994 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.994 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.994 20:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.562 [2024-12-05 20:09:22.943315] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:21.821 [2024-12-05 20:09:23.048673] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:21.821 [2024-12-05 20:09:23.051771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.080 97.00 IOPS, 291.00 MiB/s [2024-12-05T20:09:23.517Z] 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.080 "name": "raid_bdev1", 00:16:22.080 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:22.080 "strip_size_kb": 0, 00:16:22.080 "state": "online", 00:16:22.080 "raid_level": "raid1", 00:16:22.080 "superblock": false, 00:16:22.080 "num_base_bdevs": 4, 00:16:22.080 "num_base_bdevs_discovered": 3, 00:16:22.080 "num_base_bdevs_operational": 3, 00:16:22.080 "base_bdevs_list": [ 00:16:22.080 { 00:16:22.080 "name": "spare", 00:16:22.080 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:22.080 "is_configured": true, 00:16:22.080 "data_offset": 0, 00:16:22.080 "data_size": 65536 00:16:22.080 }, 00:16:22.080 { 00:16:22.080 "name": null, 00:16:22.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.080 "is_configured": false, 00:16:22.080 "data_offset": 0, 00:16:22.080 "data_size": 65536 00:16:22.080 }, 00:16:22.080 { 00:16:22.080 "name": "BaseBdev3", 00:16:22.080 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:22.080 "is_configured": true, 00:16:22.080 "data_offset": 0, 00:16:22.080 "data_size": 65536 00:16:22.080 }, 00:16:22.080 { 00:16:22.080 "name": "BaseBdev4", 00:16:22.080 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:22.080 "is_configured": true, 00:16:22.080 "data_offset": 0, 00:16:22.080 
"data_size": 65536 00:16:22.080 } 00:16:22.080 ] 00:16:22.080 }' 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:22.080 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.339 "name": "raid_bdev1", 00:16:22.339 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:22.339 "strip_size_kb": 0, 00:16:22.339 "state": "online", 00:16:22.339 "raid_level": 
"raid1", 00:16:22.339 "superblock": false, 00:16:22.339 "num_base_bdevs": 4, 00:16:22.339 "num_base_bdevs_discovered": 3, 00:16:22.339 "num_base_bdevs_operational": 3, 00:16:22.339 "base_bdevs_list": [ 00:16:22.339 { 00:16:22.339 "name": "spare", 00:16:22.339 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": null, 00:16:22.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.339 "is_configured": false, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": "BaseBdev3", 00:16:22.339 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": "BaseBdev4", 00:16:22.339 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 } 00:16:22.339 ] 00:16:22.339 }' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.339 "name": "raid_bdev1", 00:16:22.339 "uuid": "97486722-4edd-4837-b724-b160f96d6983", 00:16:22.339 "strip_size_kb": 0, 00:16:22.339 "state": "online", 00:16:22.339 "raid_level": "raid1", 00:16:22.339 "superblock": false, 00:16:22.339 "num_base_bdevs": 4, 00:16:22.339 "num_base_bdevs_discovered": 3, 00:16:22.339 "num_base_bdevs_operational": 3, 00:16:22.339 "base_bdevs_list": [ 00:16:22.339 { 00:16:22.339 "name": "spare", 00:16:22.339 "uuid": "cef9e0ba-a4ec-5330-8d16-0b7e72502168", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": null, 
00:16:22.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.339 "is_configured": false, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": "BaseBdev3", 00:16:22.339 "uuid": "3dd3a23a-f9e7-56d2-bd42-66024c9406c2", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 }, 00:16:22.339 { 00:16:22.339 "name": "BaseBdev4", 00:16:22.339 "uuid": "45d51664-c43f-5fc2-b02d-b28f048650de", 00:16:22.339 "is_configured": true, 00:16:22.339 "data_offset": 0, 00:16:22.339 "data_size": 65536 00:16:22.339 } 00:16:22.339 ] 00:16:22.339 }' 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.339 20:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.907 [2024-12-05 20:09:24.117019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.907 [2024-12-05 20:09:24.117092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.907 00:16:22.907 Latency(us) 00:16:22.907 [2024-12-05T20:09:24.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.907 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:22.907 raid_bdev1 : 7.90 90.14 270.43 0.00 0.00 15808.65 323.74 118136.51 00:16:22.907 [2024-12-05T20:09:24.344Z] =================================================================================================================== 00:16:22.907 [2024-12-05T20:09:24.344Z] Total : 90.14 270.43 0.00 0.00 15808.65 
323.74 118136.51 00:16:22.907 [2024-12-05 20:09:24.154689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.907 [2024-12-05 20:09:24.154793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.907 [2024-12-05 20:09:24.154924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.907 [2024-12-05 20:09:24.154976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:22.907 { 00:16:22.907 "results": [ 00:16:22.907 { 00:16:22.907 "job": "raid_bdev1", 00:16:22.907 "core_mask": "0x1", 00:16:22.907 "workload": "randrw", 00:16:22.907 "percentage": 50, 00:16:22.907 "status": "finished", 00:16:22.907 "queue_depth": 2, 00:16:22.907 "io_size": 3145728, 00:16:22.907 "runtime": 7.898633, 00:16:22.907 "iops": 90.14218029879348, 00:16:22.907 "mibps": 270.4265408963804, 00:16:22.907 "io_failed": 0, 00:16:22.907 "io_timeout": 0, 00:16:22.907 "avg_latency_us": 15808.646248957362, 00:16:22.907 "min_latency_us": 323.74497816593885, 00:16:22.907 "max_latency_us": 118136.51004366812 00:16:22.907 } 00:16:22.907 ], 00:16:22.907 "core_count": 1 00:16:22.907 } 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.907 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:22.907 20:09:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.908 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:23.167 /dev/nbd0 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.167 1+0 records in 00:16:23.167 1+0 records out 00:16:23.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510294 s, 8.0 MB/s 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev3 ']' 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.167 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:23.426 /dev/nbd1 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- 
# break 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.426 1+0 records in 00:16:23.426 1+0 records out 00:16:23.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567666 s, 7.2 MB/s 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.426 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:23.685 
20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.685 20:09:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:23.685 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.942 20:09:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.942 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:23.943 /dev/nbd1 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.943 1+0 records in 00:16:23.943 1+0 records out 00:16:23.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487783 s, 8.4 MB/s 00:16:23.943 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.200 
20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.200 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.458 20:09:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.458 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78844 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78844 ']' 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78844 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78844 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78844' 00:16:24.718 killing process with pid 78844 00:16:24.718 Received shutdown signal, test time was about 9.730427 seconds 00:16:24.718 00:16:24.718 Latency(us) 00:16:24.718 [2024-12-05T20:09:26.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.718 [2024-12-05T20:09:26.155Z] =================================================================================================================== 00:16:24.718 [2024-12-05T20:09:26.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78844 00:16:24.718 [2024-12-05 20:09:25.961135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.718 20:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78844 00:16:24.977 [2024-12-05 20:09:26.374956] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:26.388 00:16:26.388 real 0m13.206s 00:16:26.388 user 0m16.790s 00:16:26.388 sys 0m1.834s 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.388 ************************************ 00:16:26.388 END TEST raid_rebuild_test_io 00:16:26.388 ************************************ 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.388 20:09:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:26.388 20:09:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:26.388 20:09:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.388 20:09:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.388 ************************************ 00:16:26.388 START TEST raid_rebuild_test_sb_io 00:16:26.388 ************************************ 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@579 -- # local data_offset 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79253 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79253 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79253 ']' 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.388 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.388 [2024-12-05 20:09:27.726275] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:16:26.388 [2024-12-05 20:09:27.726468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:26.388 Zero copy mechanism will not be used. 00:16:26.388 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79253 ] 00:16:26.648 [2024-12-05 20:09:27.899121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.648 [2024-12-05 20:09:28.007823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.908 [2024-12-05 20:09:28.210876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.908 [2024-12-05 20:09:28.210977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.168 BaseBdev1_malloc 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:27.168 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.168 20:09:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.168 [2024-12-05 20:09:28.599892] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:27.168 [2024-12-05 20:09:28.599963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.168 [2024-12-05 20:09:28.599984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:27.168 [2024-12-05 20:09:28.599995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.168 [2024-12-05 20:09:28.602055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.168 [2024-12-05 20:09:28.602144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.429 BaseBdev1 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 BaseBdev2_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-12-05 20:09:28.652751] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:16:27.429 [2024-12-05 20:09:28.652809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.429 [2024-12-05 20:09:28.652829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:27.429 [2024-12-05 20:09:28.652839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.429 [2024-12-05 20:09:28.654944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.429 [2024-12-05 20:09:28.654978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:27.429 BaseBdev2 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 BaseBdev3_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-12-05 20:09:28.735649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:27.429 [2024-12-05 20:09:28.735702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.429 
[2024-12-05 20:09:28.735723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:27.429 [2024-12-05 20:09:28.735733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.429 [2024-12-05 20:09:28.737865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.429 [2024-12-05 20:09:28.737967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:27.429 BaseBdev3 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 BaseBdev4_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-12-05 20:09:28.788738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:27.429 [2024-12-05 20:09:28.788813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.429 [2024-12-05 20:09:28.788834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:27.429 [2024-12-05 20:09:28.788845] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.429 [2024-12-05 20:09:28.790909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.429 [2024-12-05 20:09:28.790989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:27.429 BaseBdev4 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 spare_malloc 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 spare_delay 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-12-05 20:09:28.855013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.429 [2024-12-05 20:09:28.855108] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:27.429 [2024-12-05 20:09:28.855129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:27.429 [2024-12-05 20:09:28.855140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.429 [2024-12-05 20:09:28.857198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.429 [2024-12-05 20:09:28.857237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.429 spare 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-05 20:09:28.867044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.689 [2024-12-05 20:09:28.868751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.689 [2024-12-05 20:09:28.868811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.689 [2024-12-05 20:09:28.868862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:27.689 [2024-12-05 20:09:28.869059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:27.689 [2024-12-05 20:09:28.869075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.689 [2024-12-05 20:09:28.869317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:27.689 [2024-12-05 20:09:28.869504] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:27.689 [2024-12-05 20:09:28.869514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:27.689 [2024-12-05 20:09:28.869667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.689 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.690 20:09:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.690 "name": "raid_bdev1", 00:16:27.690 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:27.690 "strip_size_kb": 0, 00:16:27.690 "state": "online", 00:16:27.690 "raid_level": "raid1", 00:16:27.690 "superblock": true, 00:16:27.690 "num_base_bdevs": 4, 00:16:27.690 "num_base_bdevs_discovered": 4, 00:16:27.690 "num_base_bdevs_operational": 4, 00:16:27.690 "base_bdevs_list": [ 00:16:27.690 { 00:16:27.690 "name": "BaseBdev1", 00:16:27.690 "uuid": "9595fb36-0b6a-58d0-8240-2de069416718", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "BaseBdev2", 00:16:27.690 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "BaseBdev3", 00:16:27.690 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "BaseBdev4", 00:16:27.690 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }' 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.690 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 [2024-12-05 20:09:29.306607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:27.950 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.210 [2024-12-05 20:09:29.406090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.210 
20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.210 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.210 "name": "raid_bdev1", 00:16:28.210 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 
00:16:28.210 "strip_size_kb": 0, 00:16:28.210 "state": "online", 00:16:28.210 "raid_level": "raid1", 00:16:28.210 "superblock": true, 00:16:28.210 "num_base_bdevs": 4, 00:16:28.210 "num_base_bdevs_discovered": 3, 00:16:28.210 "num_base_bdevs_operational": 3, 00:16:28.210 "base_bdevs_list": [ 00:16:28.210 { 00:16:28.210 "name": null, 00:16:28.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.210 "is_configured": false, 00:16:28.210 "data_offset": 0, 00:16:28.210 "data_size": 63488 00:16:28.210 }, 00:16:28.210 { 00:16:28.210 "name": "BaseBdev2", 00:16:28.210 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:28.211 "is_configured": true, 00:16:28.211 "data_offset": 2048, 00:16:28.211 "data_size": 63488 00:16:28.211 }, 00:16:28.211 { 00:16:28.211 "name": "BaseBdev3", 00:16:28.211 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:28.211 "is_configured": true, 00:16:28.211 "data_offset": 2048, 00:16:28.211 "data_size": 63488 00:16:28.211 }, 00:16:28.211 { 00:16:28.211 "name": "BaseBdev4", 00:16:28.211 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:28.211 "is_configured": true, 00:16:28.211 "data_offset": 2048, 00:16:28.211 "data_size": 63488 00:16:28.211 } 00:16:28.211 ] 00:16:28.211 }' 00:16:28.211 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.211 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.211 [2024-12-05 20:09:29.501651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:28.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:28.211 Zero copy mechanism will not be used. 00:16:28.211 Running I/O for 60 seconds... 
00:16:28.470 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.470 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.470 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.470 [2024-12-05 20:09:29.805464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.470 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.470 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.470 [2024-12-05 20:09:29.860408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:28.470 [2024-12-05 20:09:29.862440] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.729 [2024-12-05 20:09:29.964995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:28.729 [2024-12-05 20:09:29.965679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:28.729 [2024-12-05 20:09:30.074327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:28.729 [2024-12-05 20:09:30.074720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:28.989 [2024-12-05 20:09:30.322235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:29.249 142.00 IOPS, 426.00 MiB/s [2024-12-05T20:09:30.686Z] [2024-12-05 20:09:30.561524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 [2024-12-05 20:09:30.891793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:29.509 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.509 "name": "raid_bdev1", 00:16:29.509 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:29.509 "strip_size_kb": 0, 00:16:29.509 "state": "online", 00:16:29.509 "raid_level": "raid1", 00:16:29.509 "superblock": true, 00:16:29.509 "num_base_bdevs": 4, 00:16:29.509 "num_base_bdevs_discovered": 4, 00:16:29.509 "num_base_bdevs_operational": 4, 00:16:29.509 "process": { 00:16:29.509 "type": "rebuild", 00:16:29.509 "target": "spare", 00:16:29.509 "progress": { 00:16:29.509 "blocks": 12288, 00:16:29.509 "percent": 19 00:16:29.510 } 00:16:29.510 }, 00:16:29.510 "base_bdevs_list": [ 00:16:29.510 { 00:16:29.510 "name": "spare", 
00:16:29.510 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:29.510 "is_configured": true, 00:16:29.510 "data_offset": 2048, 00:16:29.510 "data_size": 63488 00:16:29.510 }, 00:16:29.510 { 00:16:29.510 "name": "BaseBdev2", 00:16:29.510 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:29.510 "is_configured": true, 00:16:29.510 "data_offset": 2048, 00:16:29.510 "data_size": 63488 00:16:29.510 }, 00:16:29.510 { 00:16:29.510 "name": "BaseBdev3", 00:16:29.510 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:29.510 "is_configured": true, 00:16:29.510 "data_offset": 2048, 00:16:29.510 "data_size": 63488 00:16:29.510 }, 00:16:29.510 { 00:16:29.510 "name": "BaseBdev4", 00:16:29.510 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:29.510 "is_configured": true, 00:16:29.510 "data_offset": 2048, 00:16:29.510 "data_size": 63488 00:16:29.510 } 00:16:29.510 ] 00:16:29.510 }' 00:16:29.510 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.510 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.769 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.769 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.769 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.769 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.769 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 [2024-12-05 20:09:31.004681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.769 [2024-12-05 20:09:31.096457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:29.769 [2024-12-05 
20:09:31.097283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:30.028 [2024-12-05 20:09:31.205559] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:30.028 [2024-12-05 20:09:31.217751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.028 [2024-12-05 20:09:31.217809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.028 [2024-12-05 20:09:31.217824] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:30.028 [2024-12-05 20:09:31.236662] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.028 "name": "raid_bdev1", 00:16:30.028 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:30.028 "strip_size_kb": 0, 00:16:30.028 "state": "online", 00:16:30.028 "raid_level": "raid1", 00:16:30.028 "superblock": true, 00:16:30.028 "num_base_bdevs": 4, 00:16:30.028 "num_base_bdevs_discovered": 3, 00:16:30.028 "num_base_bdevs_operational": 3, 00:16:30.028 "base_bdevs_list": [ 00:16:30.028 { 00:16:30.028 "name": null, 00:16:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.028 "is_configured": false, 00:16:30.028 "data_offset": 0, 00:16:30.028 "data_size": 63488 00:16:30.028 }, 00:16:30.028 { 00:16:30.028 "name": "BaseBdev2", 00:16:30.028 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:30.028 "is_configured": true, 00:16:30.028 "data_offset": 2048, 00:16:30.028 "data_size": 63488 00:16:30.028 }, 00:16:30.028 { 00:16:30.028 "name": "BaseBdev3", 00:16:30.028 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:30.028 "is_configured": true, 00:16:30.028 "data_offset": 2048, 00:16:30.028 "data_size": 63488 00:16:30.028 }, 00:16:30.028 { 00:16:30.028 "name": "BaseBdev4", 00:16:30.028 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:30.028 "is_configured": true, 00:16:30.028 "data_offset": 2048, 00:16:30.028 "data_size": 63488 00:16:30.028 } 
00:16:30.028 ] 00:16:30.028 }' 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.028 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.288 130.00 IOPS, 390.00 MiB/s [2024-12-05T20:09:31.725Z] 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.288 "name": "raid_bdev1", 00:16:30.288 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:30.288 "strip_size_kb": 0, 00:16:30.288 "state": "online", 00:16:30.288 "raid_level": "raid1", 00:16:30.288 "superblock": true, 00:16:30.288 "num_base_bdevs": 4, 00:16:30.288 "num_base_bdevs_discovered": 3, 00:16:30.288 "num_base_bdevs_operational": 3, 00:16:30.288 "base_bdevs_list": [ 00:16:30.288 { 00:16:30.288 "name": null, 00:16:30.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:30.288 "is_configured": false, 00:16:30.288 "data_offset": 0, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "BaseBdev2", 00:16:30.288 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "BaseBdev3", 00:16:30.288 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "BaseBdev4", 00:16:30.288 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 } 00:16:30.288 ] 00:16:30.288 }' 00:16:30.288 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.547 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.547 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.548 [2024-12-05 20:09:31.807147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.548 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:16:30.548 [2024-12-05 20:09:31.876548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:30.548 [2024-12-05 20:09:31.878567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.807 [2024-12-05 20:09:31.993581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:30.807 [2024-12-05 20:09:31.994341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:30.807 [2024-12-05 20:09:32.211035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:30.807 [2024-12-05 20:09:32.211464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:31.066 [2024-12-05 20:09:32.465740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:31.325 132.00 IOPS, 396.00 MiB/s [2024-12-05T20:09:32.762Z] [2024-12-05 20:09:32.684547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:31.325 [2024-12-05 20:09:32.684926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.585 "name": "raid_bdev1", 00:16:31.585 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:31.585 "strip_size_kb": 0, 00:16:31.585 "state": "online", 00:16:31.585 "raid_level": "raid1", 00:16:31.585 "superblock": true, 00:16:31.585 "num_base_bdevs": 4, 00:16:31.585 "num_base_bdevs_discovered": 4, 00:16:31.585 "num_base_bdevs_operational": 4, 00:16:31.585 "process": { 00:16:31.585 "type": "rebuild", 00:16:31.585 "target": "spare", 00:16:31.585 "progress": { 00:16:31.585 "blocks": 10240, 00:16:31.585 "percent": 16 00:16:31.585 } 00:16:31.585 }, 00:16:31.585 "base_bdevs_list": [ 00:16:31.585 { 00:16:31.585 "name": "spare", 00:16:31.585 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:31.585 "is_configured": true, 00:16:31.585 "data_offset": 2048, 00:16:31.585 "data_size": 63488 00:16:31.585 }, 00:16:31.585 { 00:16:31.585 "name": "BaseBdev2", 00:16:31.585 "uuid": "c5642ff4-b5e7-5dea-8d5c-0636150e38e0", 00:16:31.585 "is_configured": true, 00:16:31.585 "data_offset": 2048, 00:16:31.585 "data_size": 63488 00:16:31.585 }, 00:16:31.585 { 00:16:31.585 "name": "BaseBdev3", 00:16:31.585 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:31.585 "is_configured": true, 00:16:31.585 "data_offset": 2048, 00:16:31.585 "data_size": 63488 00:16:31.585 }, 00:16:31.585 { 00:16:31.585 "name": 
"BaseBdev4", 00:16:31.585 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:31.585 "is_configured": true, 00:16:31.585 "data_offset": 2048, 00:16:31.585 "data_size": 63488 00:16:31.585 } 00:16:31.585 ] 00:16:31.585 }' 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.585 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:31.845 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.845 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.845 [2024-12-05 20:09:33.023899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:31.845 [2024-12-05 20:09:33.026544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.845 [2024-12-05 20:09:33.228341] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:31.845 [2024-12-05 20:09:33.228808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:32.105 [2024-12-05 20:09:33.330289] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:32.105 [2024-12-05 20:09:33.330400] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.105 "name": "raid_bdev1", 00:16:32.105 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:32.105 "strip_size_kb": 0, 00:16:32.105 "state": "online", 00:16:32.105 "raid_level": "raid1", 00:16:32.105 "superblock": true, 00:16:32.105 "num_base_bdevs": 4, 00:16:32.105 "num_base_bdevs_discovered": 3, 00:16:32.105 "num_base_bdevs_operational": 3, 00:16:32.105 "process": { 00:16:32.105 "type": "rebuild", 00:16:32.105 "target": "spare", 00:16:32.105 "progress": { 00:16:32.105 "blocks": 16384, 00:16:32.105 "percent": 25 00:16:32.105 } 00:16:32.105 }, 00:16:32.105 "base_bdevs_list": [ 00:16:32.105 { 00:16:32.105 "name": "spare", 00:16:32.105 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:32.105 "is_configured": true, 00:16:32.105 "data_offset": 2048, 00:16:32.105 "data_size": 63488 00:16:32.105 }, 00:16:32.105 { 00:16:32.105 "name": null, 00:16:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.105 "is_configured": false, 00:16:32.105 "data_offset": 0, 00:16:32.105 "data_size": 63488 00:16:32.105 }, 00:16:32.105 { 00:16:32.105 "name": "BaseBdev3", 00:16:32.105 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:32.105 "is_configured": true, 00:16:32.105 "data_offset": 2048, 00:16:32.105 "data_size": 63488 00:16:32.105 }, 00:16:32.105 { 00:16:32.105 "name": "BaseBdev4", 00:16:32.105 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:32.105 "is_configured": true, 00:16:32.105 "data_offset": 2048, 00:16:32.105 "data_size": 63488 00:16:32.105 } 00:16:32.105 ] 00:16:32.105 }' 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.105 118.00 IOPS, 354.00 MiB/s [2024-12-05T20:09:33.542Z] 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.105 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.105 "name": "raid_bdev1", 00:16:32.105 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:32.105 "strip_size_kb": 0, 00:16:32.105 "state": "online", 00:16:32.105 "raid_level": "raid1", 00:16:32.105 "superblock": true, 00:16:32.105 "num_base_bdevs": 4, 00:16:32.105 "num_base_bdevs_discovered": 3, 00:16:32.105 
"num_base_bdevs_operational": 3, 00:16:32.105 "process": { 00:16:32.105 "type": "rebuild", 00:16:32.105 "target": "spare", 00:16:32.105 "progress": { 00:16:32.106 "blocks": 18432, 00:16:32.106 "percent": 29 00:16:32.106 } 00:16:32.106 }, 00:16:32.106 "base_bdevs_list": [ 00:16:32.106 { 00:16:32.106 "name": "spare", 00:16:32.106 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:32.106 "is_configured": true, 00:16:32.106 "data_offset": 2048, 00:16:32.106 "data_size": 63488 00:16:32.106 }, 00:16:32.106 { 00:16:32.106 "name": null, 00:16:32.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.106 "is_configured": false, 00:16:32.106 "data_offset": 0, 00:16:32.106 "data_size": 63488 00:16:32.106 }, 00:16:32.106 { 00:16:32.106 "name": "BaseBdev3", 00:16:32.106 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:32.106 "is_configured": true, 00:16:32.106 "data_offset": 2048, 00:16:32.106 "data_size": 63488 00:16:32.106 }, 00:16:32.106 { 00:16:32.106 "name": "BaseBdev4", 00:16:32.106 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:32.106 "is_configured": true, 00:16:32.106 "data_offset": 2048, 00:16:32.106 "data_size": 63488 00:16:32.106 } 00:16:32.106 ] 00:16:32.106 }' 00:16:32.365 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.365 [2024-12-05 20:09:33.565987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:32.365 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.365 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.365 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.365 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.365 [2024-12-05 20:09:33.668671] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:32.624 [2024-12-05 20:09:34.038673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:33.191 104.80 IOPS, 314.40 MiB/s [2024-12-05T20:09:34.628Z] [2024-12-05 20:09:34.600241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.451 "name": "raid_bdev1", 00:16:33.451 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:33.451 "strip_size_kb": 0, 00:16:33.451 "state": 
"online", 00:16:33.451 "raid_level": "raid1", 00:16:33.451 "superblock": true, 00:16:33.451 "num_base_bdevs": 4, 00:16:33.451 "num_base_bdevs_discovered": 3, 00:16:33.451 "num_base_bdevs_operational": 3, 00:16:33.451 "process": { 00:16:33.451 "type": "rebuild", 00:16:33.451 "target": "spare", 00:16:33.451 "progress": { 00:16:33.451 "blocks": 38912, 00:16:33.451 "percent": 61 00:16:33.451 } 00:16:33.451 }, 00:16:33.451 "base_bdevs_list": [ 00:16:33.451 { 00:16:33.451 "name": "spare", 00:16:33.451 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:33.451 "is_configured": true, 00:16:33.451 "data_offset": 2048, 00:16:33.451 "data_size": 63488 00:16:33.451 }, 00:16:33.451 { 00:16:33.451 "name": null, 00:16:33.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.451 "is_configured": false, 00:16:33.451 "data_offset": 0, 00:16:33.451 "data_size": 63488 00:16:33.451 }, 00:16:33.451 { 00:16:33.451 "name": "BaseBdev3", 00:16:33.451 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:33.451 "is_configured": true, 00:16:33.451 "data_offset": 2048, 00:16:33.451 "data_size": 63488 00:16:33.451 }, 00:16:33.451 { 00:16:33.451 "name": "BaseBdev4", 00:16:33.451 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:33.451 "is_configured": true, 00:16:33.451 "data_offset": 2048, 00:16:33.451 "data_size": 63488 00:16:33.451 } 00:16:33.451 ] 00:16:33.451 }' 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.451 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.711 [2024-12-05 20:09:35.040170] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:34.280 [2024-12-05 20:09:35.479245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:34.539 92.17 IOPS, 276.50 MiB/s [2024-12-05T20:09:35.976Z] 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.539 "name": "raid_bdev1", 00:16:34.539 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:34.539 "strip_size_kb": 0, 00:16:34.539 "state": "online", 00:16:34.539 "raid_level": "raid1", 00:16:34.539 "superblock": true, 00:16:34.539 "num_base_bdevs": 4, 00:16:34.539 "num_base_bdevs_discovered": 3, 
00:16:34.539 "num_base_bdevs_operational": 3, 00:16:34.539 "process": { 00:16:34.539 "type": "rebuild", 00:16:34.539 "target": "spare", 00:16:34.539 "progress": { 00:16:34.539 "blocks": 55296, 00:16:34.539 "percent": 87 00:16:34.539 } 00:16:34.539 }, 00:16:34.539 "base_bdevs_list": [ 00:16:34.539 { 00:16:34.539 "name": "spare", 00:16:34.539 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:34.539 "is_configured": true, 00:16:34.539 "data_offset": 2048, 00:16:34.539 "data_size": 63488 00:16:34.539 }, 00:16:34.539 { 00:16:34.539 "name": null, 00:16:34.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.539 "is_configured": false, 00:16:34.539 "data_offset": 0, 00:16:34.539 "data_size": 63488 00:16:34.539 }, 00:16:34.539 { 00:16:34.539 "name": "BaseBdev3", 00:16:34.539 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:34.539 "is_configured": true, 00:16:34.539 "data_offset": 2048, 00:16:34.539 "data_size": 63488 00:16:34.539 }, 00:16:34.539 { 00:16:34.539 "name": "BaseBdev4", 00:16:34.539 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:34.539 "is_configured": true, 00:16:34.539 "data_offset": 2048, 00:16:34.539 "data_size": 63488 00:16:34.539 } 00:16:34.539 ] 00:16:34.539 }' 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.539 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.798 [2024-12-05 20:09:36.146274] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:35.056 [2024-12-05 20:09:36.251964] bdev_raid.c:2562:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:35.056 [2024-12-05 20:09:36.256345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.573 83.43 IOPS, 250.29 MiB/s [2024-12-05T20:09:37.010Z] 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.573 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.573 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.573 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.573 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.573 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.574 "name": "raid_bdev1", 00:16:35.574 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:35.574 "strip_size_kb": 0, 00:16:35.574 "state": "online", 00:16:35.574 "raid_level": "raid1", 00:16:35.574 "superblock": true, 00:16:35.574 "num_base_bdevs": 4, 00:16:35.574 "num_base_bdevs_discovered": 3, 00:16:35.574 "num_base_bdevs_operational": 3, 00:16:35.574 "base_bdevs_list": [ 00:16:35.574 { 00:16:35.574 "name": 
"spare", 00:16:35.574 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:35.574 "is_configured": true, 00:16:35.574 "data_offset": 2048, 00:16:35.574 "data_size": 63488 00:16:35.574 }, 00:16:35.574 { 00:16:35.574 "name": null, 00:16:35.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.574 "is_configured": false, 00:16:35.574 "data_offset": 0, 00:16:35.574 "data_size": 63488 00:16:35.574 }, 00:16:35.574 { 00:16:35.574 "name": "BaseBdev3", 00:16:35.574 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:35.574 "is_configured": true, 00:16:35.574 "data_offset": 2048, 00:16:35.574 "data_size": 63488 00:16:35.574 }, 00:16:35.574 { 00:16:35.574 "name": "BaseBdev4", 00:16:35.574 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:35.574 "is_configured": true, 00:16:35.574 "data_offset": 2048, 00:16:35.574 "data_size": 63488 00:16:35.574 } 00:16:35.574 ] 00:16:35.574 }' 00:16:35.574 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.833 "name": "raid_bdev1", 00:16:35.833 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:35.833 "strip_size_kb": 0, 00:16:35.833 "state": "online", 00:16:35.833 "raid_level": "raid1", 00:16:35.833 "superblock": true, 00:16:35.833 "num_base_bdevs": 4, 00:16:35.833 "num_base_bdevs_discovered": 3, 00:16:35.833 "num_base_bdevs_operational": 3, 00:16:35.833 "base_bdevs_list": [ 00:16:35.833 { 00:16:35.833 "name": "spare", 00:16:35.833 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:35.833 "is_configured": true, 00:16:35.833 "data_offset": 2048, 00:16:35.833 "data_size": 63488 00:16:35.833 }, 00:16:35.833 { 00:16:35.833 "name": null, 00:16:35.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.833 "is_configured": false, 00:16:35.833 "data_offset": 0, 00:16:35.833 "data_size": 63488 00:16:35.833 }, 00:16:35.833 { 00:16:35.833 "name": "BaseBdev3", 00:16:35.833 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:35.833 "is_configured": true, 00:16:35.833 "data_offset": 2048, 00:16:35.833 "data_size": 63488 00:16:35.833 }, 00:16:35.833 { 00:16:35.833 "name": "BaseBdev4", 00:16:35.833 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:35.833 "is_configured": true, 00:16:35.833 "data_offset": 2048, 00:16:35.833 "data_size": 63488 00:16:35.833 } 00:16:35.833 ] 
00:16:35.833 }' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.833 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.834 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.093 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.093 "name": "raid_bdev1", 00:16:36.093 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:36.093 "strip_size_kb": 0, 00:16:36.093 "state": "online", 00:16:36.093 "raid_level": "raid1", 00:16:36.093 "superblock": true, 00:16:36.093 "num_base_bdevs": 4, 00:16:36.093 "num_base_bdevs_discovered": 3, 00:16:36.093 "num_base_bdevs_operational": 3, 00:16:36.093 "base_bdevs_list": [ 00:16:36.093 { 00:16:36.093 "name": "spare", 00:16:36.093 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:36.093 "is_configured": true, 00:16:36.093 "data_offset": 2048, 00:16:36.093 "data_size": 63488 00:16:36.093 }, 00:16:36.093 { 00:16:36.093 "name": null, 00:16:36.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.093 "is_configured": false, 00:16:36.093 "data_offset": 0, 00:16:36.093 "data_size": 63488 00:16:36.093 }, 00:16:36.093 { 00:16:36.093 "name": "BaseBdev3", 00:16:36.093 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:36.093 "is_configured": true, 00:16:36.093 "data_offset": 2048, 00:16:36.093 "data_size": 63488 00:16:36.093 }, 00:16:36.093 { 00:16:36.093 "name": "BaseBdev4", 00:16:36.093 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:36.093 "is_configured": true, 00:16:36.093 "data_offset": 2048, 00:16:36.093 "data_size": 63488 00:16:36.093 } 00:16:36.093 ] 00:16:36.093 }' 00:16:36.093 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.093 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.353 78.88 IOPS, 236.62 MiB/s [2024-12-05T20:09:37.790Z] 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.353 20:09:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.353 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.353 [2024-12-05 20:09:37.719880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.353 [2024-12-05 20:09:37.719975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.353 00:16:36.353 Latency(us) 00:16:36.353 [2024-12-05T20:09:37.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.353 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:36.353 raid_bdev1 : 8.28 76.90 230.69 0.00 0.00 18007.68 334.48 117220.72 00:16:36.353 [2024-12-05T20:09:37.790Z] =================================================================================================================== 00:16:36.353 [2024-12-05T20:09:37.790Z] Total : 76.90 230.69 0.00 0.00 18007.68 334.48 117220.72 00:16:36.612 [2024-12-05 20:09:37.793839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.612 [2024-12-05 20:09:37.793986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.612 [2024-12-05 20:09:37.794126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.612 [2024-12-05 20:09:37.794175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.612 { 00:16:36.612 "results": [ 00:16:36.612 { 00:16:36.612 "job": "raid_bdev1", 00:16:36.612 "core_mask": "0x1", 00:16:36.612 "workload": "randrw", 00:16:36.612 "percentage": 50, 00:16:36.612 "status": "finished", 00:16:36.612 "queue_depth": 2, 00:16:36.612 "io_size": 3145728, 00:16:36.612 "runtime": 8.283795, 00:16:36.612 "iops": 76.89712263521731, 00:16:36.612 "mibps": 230.69136790565193, 00:16:36.612 "io_failed": 0, 
00:16:36.612 "io_timeout": 0, 00:16:36.612 "avg_latency_us": 18007.678065166274, 00:16:36.612 "min_latency_us": 334.4768558951965, 00:16:36.612 "max_latency_us": 117220.7231441048 00:16:36.612 } 00:16:36.612 ], 00:16:36.612 "core_count": 1 00:16:36.612 } 00:16:36.612 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.612 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.612 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:36.613 20:09:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.613 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:36.872 /dev/nbd0 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.872 1+0 records in 00:16:36.872 1+0 records out 00:16:36.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363362 s, 11.3 MB/s 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.872 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:37.131 /dev/nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.131 1+0 records in 00:16:37.131 1+0 records out 00:16:37.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048078 s, 8.5 MB/s 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 
00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.131 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.391 20:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:37.651 /dev/nbd1 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:37.651 20:09:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.651 1+0 records in 00:16:37.651 1+0 records out 00:16:37.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372023 s, 11.0 MB/s 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.651 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.651 
20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:37.910 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:37.911 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.911 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:37.911 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.911 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.170 
20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 [2024-12-05 20:09:39.563305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.170 [2024-12-05 20:09:39.563361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.170 [2024-12-05 20:09:39.563398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:38.170 [2024-12-05 20:09:39.563415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.170 [2024-12-05 20:09:39.565678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.170 [2024-12-05 20:09:39.565766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.170 [2024-12-05 20:09:39.565912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.170 [2024-12-05 20:09:39.566004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.170 [2024-12-05 20:09:39.566201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.170 [2024-12-05 20:09:39.566339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.170 spare 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.170 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.438 [2024-12-05 20:09:39.666284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:38.438 [2024-12-05 20:09:39.666321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.438 [2024-12-05 20:09:39.666663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:38.438 [2024-12-05 20:09:39.666865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:38.438 [2024-12-05 20:09:39.666878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:38.438 [2024-12-05 20:09:39.667127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.438 20:09:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.438 "name": "raid_bdev1", 00:16:38.438 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:38.438 "strip_size_kb": 0, 00:16:38.438 "state": "online", 00:16:38.438 "raid_level": "raid1", 00:16:38.438 "superblock": true, 00:16:38.438 "num_base_bdevs": 4, 00:16:38.438 "num_base_bdevs_discovered": 3, 00:16:38.438 "num_base_bdevs_operational": 3, 00:16:38.438 "base_bdevs_list": [ 00:16:38.438 { 00:16:38.438 "name": "spare", 00:16:38.438 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:38.438 "is_configured": true, 00:16:38.438 "data_offset": 2048, 00:16:38.438 "data_size": 63488 00:16:38.438 }, 00:16:38.438 { 00:16:38.438 "name": null, 00:16:38.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.438 "is_configured": false, 00:16:38.438 "data_offset": 2048, 00:16:38.438 "data_size": 63488 00:16:38.438 }, 00:16:38.438 { 00:16:38.438 "name": "BaseBdev3", 00:16:38.438 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:38.438 "is_configured": true, 00:16:38.438 "data_offset": 2048, 00:16:38.438 "data_size": 63488 00:16:38.438 }, 00:16:38.438 { 00:16:38.438 "name": "BaseBdev4", 00:16:38.438 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:38.438 "is_configured": true, 00:16:38.438 "data_offset": 2048, 00:16:38.438 
"data_size": 63488 00:16:38.438 } 00:16:38.438 ] 00:16:38.438 }' 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.438 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.013 "name": "raid_bdev1", 00:16:39.013 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:39.013 "strip_size_kb": 0, 00:16:39.013 "state": "online", 00:16:39.013 "raid_level": "raid1", 00:16:39.013 "superblock": true, 00:16:39.013 "num_base_bdevs": 4, 00:16:39.013 "num_base_bdevs_discovered": 3, 00:16:39.013 "num_base_bdevs_operational": 3, 00:16:39.013 "base_bdevs_list": [ 00:16:39.013 { 00:16:39.013 "name": "spare", 00:16:39.013 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 
00:16:39.013 "is_configured": true, 00:16:39.013 "data_offset": 2048, 00:16:39.013 "data_size": 63488 00:16:39.013 }, 00:16:39.013 { 00:16:39.013 "name": null, 00:16:39.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.013 "is_configured": false, 00:16:39.013 "data_offset": 2048, 00:16:39.013 "data_size": 63488 00:16:39.013 }, 00:16:39.013 { 00:16:39.013 "name": "BaseBdev3", 00:16:39.013 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:39.013 "is_configured": true, 00:16:39.013 "data_offset": 2048, 00:16:39.013 "data_size": 63488 00:16:39.013 }, 00:16:39.013 { 00:16:39.013 "name": "BaseBdev4", 00:16:39.013 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:39.013 "is_configured": true, 00:16:39.013 "data_offset": 2048, 00:16:39.013 "data_size": 63488 00:16:39.013 } 00:16:39.013 ] 00:16:39.013 }' 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.013 20:09:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.013 [2024-12-05 20:09:40.350160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.013 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.014 "name": "raid_bdev1", 00:16:39.014 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:39.014 "strip_size_kb": 0, 00:16:39.014 "state": "online", 00:16:39.014 "raid_level": "raid1", 00:16:39.014 "superblock": true, 00:16:39.014 "num_base_bdevs": 4, 00:16:39.014 "num_base_bdevs_discovered": 2, 00:16:39.014 "num_base_bdevs_operational": 2, 00:16:39.014 "base_bdevs_list": [ 00:16:39.014 { 00:16:39.014 "name": null, 00:16:39.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.014 "is_configured": false, 00:16:39.014 "data_offset": 0, 00:16:39.014 "data_size": 63488 00:16:39.014 }, 00:16:39.014 { 00:16:39.014 "name": null, 00:16:39.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.014 "is_configured": false, 00:16:39.014 "data_offset": 2048, 00:16:39.014 "data_size": 63488 00:16:39.014 }, 00:16:39.014 { 00:16:39.014 "name": "BaseBdev3", 00:16:39.014 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:39.014 "is_configured": true, 00:16:39.014 "data_offset": 2048, 00:16:39.014 "data_size": 63488 00:16:39.014 }, 00:16:39.014 { 00:16:39.014 "name": "BaseBdev4", 00:16:39.014 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:39.014 "is_configured": true, 00:16:39.014 "data_offset": 2048, 00:16:39.014 "data_size": 63488 00:16:39.014 } 00:16:39.014 ] 00:16:39.014 }' 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.014 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.582 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.582 20:09:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.582 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.583 [2024-12-05 20:09:40.809425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.583 [2024-12-05 20:09:40.809639] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:39.583 [2024-12-05 20:09:40.809655] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:39.583 [2024-12-05 20:09:40.809695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.583 [2024-12-05 20:09:40.824120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:39.583 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.583 20:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:39.583 [2024-12-05 20:09:40.826036] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.518 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.518 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.518 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.519 "name": "raid_bdev1", 00:16:40.519 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:40.519 "strip_size_kb": 0, 00:16:40.519 "state": "online", 00:16:40.519 "raid_level": "raid1", 00:16:40.519 "superblock": true, 00:16:40.519 "num_base_bdevs": 4, 00:16:40.519 "num_base_bdevs_discovered": 3, 00:16:40.519 "num_base_bdevs_operational": 3, 00:16:40.519 "process": { 00:16:40.519 "type": "rebuild", 00:16:40.519 "target": "spare", 00:16:40.519 "progress": { 00:16:40.519 "blocks": 20480, 00:16:40.519 "percent": 32 00:16:40.519 } 00:16:40.519 }, 00:16:40.519 "base_bdevs_list": [ 00:16:40.519 { 00:16:40.519 "name": "spare", 00:16:40.519 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:40.519 "is_configured": true, 00:16:40.519 "data_offset": 2048, 00:16:40.519 "data_size": 63488 00:16:40.519 }, 00:16:40.519 { 00:16:40.519 "name": null, 00:16:40.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.519 "is_configured": false, 00:16:40.519 "data_offset": 2048, 00:16:40.519 "data_size": 63488 00:16:40.519 }, 00:16:40.519 { 00:16:40.519 "name": "BaseBdev3", 00:16:40.519 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:40.519 "is_configured": true, 00:16:40.519 "data_offset": 2048, 00:16:40.519 "data_size": 63488 00:16:40.519 }, 00:16:40.519 { 00:16:40.519 "name": "BaseBdev4", 00:16:40.519 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:40.519 "is_configured": true, 00:16:40.519 "data_offset": 2048, 00:16:40.519 "data_size": 63488 00:16:40.519 } 00:16:40.519 ] 00:16:40.519 }' 00:16:40.519 
20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.519 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.778 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.778 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.778 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.778 20:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.778 [2024-12-05 20:09:41.993554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.778 [2024-12-05 20:09:42.031732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.778 [2024-12-05 20:09:42.031796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.778 [2024-12-05 20:09:42.031818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.778 [2024-12-05 20:09:42.031825] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.778 "name": "raid_bdev1", 00:16:40.778 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:40.778 "strip_size_kb": 0, 00:16:40.778 "state": "online", 00:16:40.778 "raid_level": "raid1", 00:16:40.778 "superblock": true, 00:16:40.778 "num_base_bdevs": 4, 00:16:40.778 "num_base_bdevs_discovered": 2, 00:16:40.778 "num_base_bdevs_operational": 2, 00:16:40.778 "base_bdevs_list": [ 00:16:40.778 { 00:16:40.778 "name": null, 00:16:40.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.778 "is_configured": false, 00:16:40.778 "data_offset": 0, 00:16:40.778 "data_size": 63488 00:16:40.778 }, 00:16:40.778 { 00:16:40.778 "name": null, 00:16:40.778 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:40.778 "is_configured": false, 00:16:40.778 "data_offset": 2048, 00:16:40.778 "data_size": 63488 00:16:40.778 }, 00:16:40.778 { 00:16:40.778 "name": "BaseBdev3", 00:16:40.778 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:40.778 "is_configured": true, 00:16:40.778 "data_offset": 2048, 00:16:40.778 "data_size": 63488 00:16:40.778 }, 00:16:40.778 { 00:16:40.778 "name": "BaseBdev4", 00:16:40.778 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:40.778 "is_configured": true, 00:16:40.778 "data_offset": 2048, 00:16:40.778 "data_size": 63488 00:16:40.778 } 00:16:40.778 ] 00:16:40.778 }' 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.778 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.348 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.348 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.348 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.348 [2024-12-05 20:09:42.548279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.348 [2024-12-05 20:09:42.548429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.348 [2024-12-05 20:09:42.548468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:41.348 [2024-12-05 20:09:42.548479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.348 [2024-12-05 20:09:42.549024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.348 [2024-12-05 20:09:42.549048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.348 [2024-12-05 20:09:42.549156] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.348 [2024-12-05 20:09:42.549170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:41.348 [2024-12-05 20:09:42.549182] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:41.348 [2024-12-05 20:09:42.549215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.348 [2024-12-05 20:09:42.565655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:41.348 spare 00:16:41.348 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.348 20:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:41.348 [2024-12-05 20:09:42.567680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.286 20:09:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.286 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.286 "name": "raid_bdev1", 00:16:42.287 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:42.287 "strip_size_kb": 0, 00:16:42.287 "state": "online", 00:16:42.287 "raid_level": "raid1", 00:16:42.287 "superblock": true, 00:16:42.287 "num_base_bdevs": 4, 00:16:42.287 "num_base_bdevs_discovered": 3, 00:16:42.287 "num_base_bdevs_operational": 3, 00:16:42.287 "process": { 00:16:42.287 "type": "rebuild", 00:16:42.287 "target": "spare", 00:16:42.287 "progress": { 00:16:42.287 "blocks": 20480, 00:16:42.287 "percent": 32 00:16:42.287 } 00:16:42.287 }, 00:16:42.287 "base_bdevs_list": [ 00:16:42.287 { 00:16:42.287 "name": "spare", 00:16:42.287 "uuid": "dc1e8b33-69b5-5e51-8074-cf7d5f97715e", 00:16:42.287 "is_configured": true, 00:16:42.287 "data_offset": 2048, 00:16:42.287 "data_size": 63488 00:16:42.287 }, 00:16:42.287 { 00:16:42.287 "name": null, 00:16:42.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.287 "is_configured": false, 00:16:42.287 "data_offset": 2048, 00:16:42.287 "data_size": 63488 00:16:42.287 }, 00:16:42.287 { 00:16:42.287 "name": "BaseBdev3", 00:16:42.287 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:42.287 "is_configured": true, 00:16:42.287 "data_offset": 2048, 00:16:42.287 "data_size": 63488 00:16:42.287 }, 00:16:42.287 { 00:16:42.287 "name": "BaseBdev4", 00:16:42.287 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:42.287 "is_configured": true, 00:16:42.287 "data_offset": 2048, 00:16:42.287 "data_size": 63488 00:16:42.287 } 00:16:42.287 ] 00:16:42.287 }' 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.287 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.287 [2024-12-05 20:09:43.683055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.546 [2024-12-05 20:09:43.772976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.546 [2024-12-05 20:09:43.773097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.546 [2024-12-05 20:09:43.773133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.546 [2024-12-05 20:09:43.773156] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.546 "name": "raid_bdev1", 00:16:42.546 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:42.546 "strip_size_kb": 0, 00:16:42.546 "state": "online", 00:16:42.546 "raid_level": "raid1", 00:16:42.546 "superblock": true, 00:16:42.546 "num_base_bdevs": 4, 00:16:42.546 "num_base_bdevs_discovered": 2, 00:16:42.546 "num_base_bdevs_operational": 2, 00:16:42.546 "base_bdevs_list": [ 00:16:42.546 { 00:16:42.546 "name": null, 00:16:42.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.546 "is_configured": false, 00:16:42.546 "data_offset": 0, 00:16:42.546 "data_size": 63488 00:16:42.546 }, 00:16:42.546 { 00:16:42.546 "name": null, 00:16:42.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.546 "is_configured": false, 00:16:42.546 "data_offset": 2048, 00:16:42.546 "data_size": 63488 00:16:42.546 }, 
00:16:42.546 { 00:16:42.546 "name": "BaseBdev3", 00:16:42.546 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:42.546 "is_configured": true, 00:16:42.546 "data_offset": 2048, 00:16:42.546 "data_size": 63488 00:16:42.546 }, 00:16:42.546 { 00:16:42.546 "name": "BaseBdev4", 00:16:42.546 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:42.546 "is_configured": true, 00:16:42.546 "data_offset": 2048, 00:16:42.546 "data_size": 63488 00:16:42.546 } 00:16:42.546 ] 00:16:42.546 }' 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.546 20:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.804 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.064 "name": "raid_bdev1", 00:16:43.064 "uuid": 
"0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:43.064 "strip_size_kb": 0, 00:16:43.064 "state": "online", 00:16:43.064 "raid_level": "raid1", 00:16:43.064 "superblock": true, 00:16:43.064 "num_base_bdevs": 4, 00:16:43.064 "num_base_bdevs_discovered": 2, 00:16:43.064 "num_base_bdevs_operational": 2, 00:16:43.064 "base_bdevs_list": [ 00:16:43.064 { 00:16:43.064 "name": null, 00:16:43.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.064 "is_configured": false, 00:16:43.064 "data_offset": 0, 00:16:43.064 "data_size": 63488 00:16:43.064 }, 00:16:43.064 { 00:16:43.064 "name": null, 00:16:43.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.064 "is_configured": false, 00:16:43.064 "data_offset": 2048, 00:16:43.064 "data_size": 63488 00:16:43.064 }, 00:16:43.064 { 00:16:43.064 "name": "BaseBdev3", 00:16:43.064 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:43.064 "is_configured": true, 00:16:43.064 "data_offset": 2048, 00:16:43.064 "data_size": 63488 00:16:43.064 }, 00:16:43.064 { 00:16:43.064 "name": "BaseBdev4", 00:16:43.064 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:43.064 "is_configured": true, 00:16:43.064 "data_offset": 2048, 00:16:43.064 "data_size": 63488 00:16:43.064 } 00:16:43.064 ] 00:16:43.064 }' 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.064 20:09:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.064 [2024-12-05 20:09:44.400674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.064 [2024-12-05 20:09:44.400731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.064 [2024-12-05 20:09:44.400751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:43.064 [2024-12-05 20:09:44.400761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.064 [2024-12-05 20:09:44.401226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.064 [2024-12-05 20:09:44.401246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.064 [2024-12-05 20:09:44.401327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.064 [2024-12-05 20:09:44.401345] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:43.064 [2024-12-05 20:09:44.401352] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.064 [2024-12-05 20:09:44.401367] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.064 BaseBdev1 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:43.064 20:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.002 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.260 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.261 "name": "raid_bdev1", 00:16:44.261 "uuid": 
"0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:44.261 "strip_size_kb": 0, 00:16:44.261 "state": "online", 00:16:44.261 "raid_level": "raid1", 00:16:44.261 "superblock": true, 00:16:44.261 "num_base_bdevs": 4, 00:16:44.261 "num_base_bdevs_discovered": 2, 00:16:44.261 "num_base_bdevs_operational": 2, 00:16:44.261 "base_bdevs_list": [ 00:16:44.261 { 00:16:44.261 "name": null, 00:16:44.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.261 "is_configured": false, 00:16:44.261 "data_offset": 0, 00:16:44.261 "data_size": 63488 00:16:44.261 }, 00:16:44.261 { 00:16:44.261 "name": null, 00:16:44.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.261 "is_configured": false, 00:16:44.261 "data_offset": 2048, 00:16:44.261 "data_size": 63488 00:16:44.261 }, 00:16:44.261 { 00:16:44.261 "name": "BaseBdev3", 00:16:44.261 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:44.261 "is_configured": true, 00:16:44.261 "data_offset": 2048, 00:16:44.261 "data_size": 63488 00:16:44.261 }, 00:16:44.261 { 00:16:44.261 "name": "BaseBdev4", 00:16:44.261 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:44.261 "is_configured": true, 00:16:44.261 "data_offset": 2048, 00:16:44.261 "data_size": 63488 00:16:44.261 } 00:16:44.261 ] 00:16:44.261 }' 00:16:44.261 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.261 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.518 "name": "raid_bdev1", 00:16:44.518 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:44.518 "strip_size_kb": 0, 00:16:44.518 "state": "online", 00:16:44.518 "raid_level": "raid1", 00:16:44.518 "superblock": true, 00:16:44.518 "num_base_bdevs": 4, 00:16:44.518 "num_base_bdevs_discovered": 2, 00:16:44.518 "num_base_bdevs_operational": 2, 00:16:44.518 "base_bdevs_list": [ 00:16:44.518 { 00:16:44.518 "name": null, 00:16:44.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.518 "is_configured": false, 00:16:44.518 "data_offset": 0, 00:16:44.518 "data_size": 63488 00:16:44.518 }, 00:16:44.518 { 00:16:44.518 "name": null, 00:16:44.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.518 "is_configured": false, 00:16:44.518 "data_offset": 2048, 00:16:44.518 "data_size": 63488 00:16:44.518 }, 00:16:44.518 { 00:16:44.518 "name": "BaseBdev3", 00:16:44.518 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:44.518 "is_configured": true, 00:16:44.518 "data_offset": 2048, 00:16:44.518 "data_size": 63488 00:16:44.518 }, 00:16:44.518 { 00:16:44.518 "name": "BaseBdev4", 00:16:44.518 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:44.518 "is_configured": true, 00:16:44.518 "data_offset": 2048, 00:16:44.518 "data_size": 63488 00:16:44.518 
} 00:16:44.518 ] 00:16:44.518 }' 00:16:44.518 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.776 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.776 20:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.776 [2024-12-05 20:09:46.046147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.776 [2024-12-05 20:09:46.046312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:16:44.776 [2024-12-05 20:09:46.046324] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:44.776 request: 00:16:44.776 { 00:16:44.776 "base_bdev": "BaseBdev1", 00:16:44.776 "raid_bdev": "raid_bdev1", 00:16:44.776 "method": "bdev_raid_add_base_bdev", 00:16:44.776 "req_id": 1 00:16:44.776 } 00:16:44.776 Got JSON-RPC error response 00:16:44.776 response: 00:16:44.776 { 00:16:44.776 "code": -22, 00:16:44.776 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:44.776 } 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.776 20:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.711 "name": "raid_bdev1", 00:16:45.711 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:45.711 "strip_size_kb": 0, 00:16:45.711 "state": "online", 00:16:45.711 "raid_level": "raid1", 00:16:45.711 "superblock": true, 00:16:45.711 "num_base_bdevs": 4, 00:16:45.711 "num_base_bdevs_discovered": 2, 00:16:45.711 "num_base_bdevs_operational": 2, 00:16:45.711 "base_bdevs_list": [ 00:16:45.711 { 00:16:45.711 "name": null, 00:16:45.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.711 "is_configured": false, 00:16:45.711 "data_offset": 0, 00:16:45.711 "data_size": 63488 00:16:45.711 }, 00:16:45.711 { 00:16:45.711 "name": null, 00:16:45.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.711 "is_configured": false, 00:16:45.711 "data_offset": 2048, 00:16:45.711 "data_size": 63488 00:16:45.711 }, 00:16:45.711 { 00:16:45.711 "name": "BaseBdev3", 00:16:45.711 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:45.711 "is_configured": true, 00:16:45.711 
"data_offset": 2048, 00:16:45.711 "data_size": 63488 00:16:45.711 }, 00:16:45.711 { 00:16:45.711 "name": "BaseBdev4", 00:16:45.711 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:45.711 "is_configured": true, 00:16:45.711 "data_offset": 2048, 00:16:45.711 "data_size": 63488 00:16:45.711 } 00:16:45.711 ] 00:16:45.711 }' 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.711 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.279 "name": "raid_bdev1", 00:16:46.279 "uuid": "0329e44c-a839-4011-8179-5f6a0916aaf9", 00:16:46.279 "strip_size_kb": 0, 00:16:46.279 "state": "online", 00:16:46.279 "raid_level": "raid1", 00:16:46.279 "superblock": true, 
00:16:46.279 "num_base_bdevs": 4, 00:16:46.279 "num_base_bdevs_discovered": 2, 00:16:46.279 "num_base_bdevs_operational": 2, 00:16:46.279 "base_bdevs_list": [ 00:16:46.279 { 00:16:46.279 "name": null, 00:16:46.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.279 "is_configured": false, 00:16:46.279 "data_offset": 0, 00:16:46.279 "data_size": 63488 00:16:46.279 }, 00:16:46.279 { 00:16:46.279 "name": null, 00:16:46.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.279 "is_configured": false, 00:16:46.279 "data_offset": 2048, 00:16:46.279 "data_size": 63488 00:16:46.279 }, 00:16:46.279 { 00:16:46.279 "name": "BaseBdev3", 00:16:46.279 "uuid": "d3557bf5-1bb1-5ee8-b78f-f7ef0623b559", 00:16:46.279 "is_configured": true, 00:16:46.279 "data_offset": 2048, 00:16:46.279 "data_size": 63488 00:16:46.279 }, 00:16:46.279 { 00:16:46.279 "name": "BaseBdev4", 00:16:46.279 "uuid": "e5c416a4-bd80-5f4d-9077-aaca056917fe", 00:16:46.279 "is_configured": true, 00:16:46.279 "data_offset": 2048, 00:16:46.279 "data_size": 63488 00:16:46.279 } 00:16:46.279 ] 00:16:46.279 }' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79253 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79253 ']' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79253 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:46.279 20:09:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.279 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79253 00:16:46.538 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.538 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.538 killing process with pid 79253 00:16:46.538 Received shutdown signal, test time was about 18.254547 seconds 00:16:46.538 00:16:46.538 Latency(us) 00:16:46.538 [2024-12-05T20:09:47.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.538 [2024-12-05T20:09:47.975Z] =================================================================================================================== 00:16:46.538 [2024-12-05T20:09:47.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.538 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79253' 00:16:46.538 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79253 00:16:46.538 [2024-12-05 20:09:47.723311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.538 20:09:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79253 00:16:46.538 [2024-12-05 20:09:47.723454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.538 [2024-12-05 20:09:47.723528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.538 [2024-12-05 20:09:47.723538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:46.797 [2024-12-05 20:09:48.123462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.177 20:09:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:48.177 00:16:48.177 real 0m21.632s 00:16:48.177 user 0m28.366s 00:16:48.177 sys 0m2.542s 00:16:48.177 ************************************ 00:16:48.177 END TEST raid_rebuild_test_sb_io 00:16:48.177 ************************************ 00:16:48.177 20:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.177 20:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.177 20:09:49 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:48.177 20:09:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:48.177 20:09:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:48.177 20:09:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.177 20:09:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.177 ************************************ 00:16:48.177 START TEST raid5f_state_function_test 00:16:48.177 ************************************ 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79976 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79976' 00:16:48.177 Process raid pid: 79976 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79976 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79976 ']' 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.177 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.178 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.178 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.178 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.178 [2024-12-05 20:09:49.424401] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:16:48.178 [2024-12-05 20:09:49.424616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.178 [2024-12-05 20:09:49.585593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.437 [2024-12-05 20:09:49.692993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.696 [2024-12-05 20:09:49.892162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.696 [2024-12-05 20:09:49.892250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.956 [2024-12-05 20:09:50.251606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.956 [2024-12-05 20:09:50.251736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.956 [2024-12-05 20:09:50.251769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.956 [2024-12-05 20:09:50.251792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.956 [2024-12-05 20:09:50.251810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:48.956 [2024-12-05 20:09:50.251831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.956 "name": "Existed_Raid", 00:16:48.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.956 "strip_size_kb": 64, 00:16:48.956 "state": "configuring", 00:16:48.956 "raid_level": "raid5f", 00:16:48.956 "superblock": false, 00:16:48.956 "num_base_bdevs": 3, 00:16:48.956 "num_base_bdevs_discovered": 0, 00:16:48.956 "num_base_bdevs_operational": 3, 00:16:48.956 "base_bdevs_list": [ 00:16:48.956 { 00:16:48.956 "name": "BaseBdev1", 00:16:48.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.956 "is_configured": false, 00:16:48.956 "data_offset": 0, 00:16:48.956 "data_size": 0 00:16:48.956 }, 00:16:48.956 { 00:16:48.956 "name": "BaseBdev2", 00:16:48.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.956 "is_configured": false, 00:16:48.956 "data_offset": 0, 00:16:48.956 "data_size": 0 00:16:48.956 }, 00:16:48.956 { 00:16:48.956 "name": "BaseBdev3", 00:16:48.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.956 "is_configured": false, 00:16:48.956 "data_offset": 0, 00:16:48.956 "data_size": 0 00:16:48.956 } 00:16:48.956 ] 00:16:48.956 }' 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.956 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 [2024-12-05 20:09:50.706762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.525 [2024-12-05 20:09:50.706835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 [2024-12-05 20:09:50.718748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.525 [2024-12-05 20:09:50.718822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.525 [2024-12-05 20:09:50.718865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.525 [2024-12-05 20:09:50.718878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.525 [2024-12-05 20:09:50.718884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.525 [2024-12-05 20:09:50.718893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 [2024-12-05 20:09:50.761469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.525 BaseBdev1 00:16:49.525 20:09:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 [ 00:16:49.525 { 00:16:49.525 "name": "BaseBdev1", 00:16:49.525 "aliases": [ 00:16:49.525 "19efd065-e800-488b-9a56-c78b5b851052" 00:16:49.525 ], 00:16:49.525 "product_name": "Malloc disk", 00:16:49.525 "block_size": 512, 00:16:49.525 "num_blocks": 65536, 00:16:49.525 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:49.525 "assigned_rate_limits": { 00:16:49.525 "rw_ios_per_sec": 0, 00:16:49.525 
"rw_mbytes_per_sec": 0, 00:16:49.525 "r_mbytes_per_sec": 0, 00:16:49.525 "w_mbytes_per_sec": 0 00:16:49.525 }, 00:16:49.525 "claimed": true, 00:16:49.525 "claim_type": "exclusive_write", 00:16:49.525 "zoned": false, 00:16:49.525 "supported_io_types": { 00:16:49.525 "read": true, 00:16:49.525 "write": true, 00:16:49.525 "unmap": true, 00:16:49.525 "flush": true, 00:16:49.525 "reset": true, 00:16:49.525 "nvme_admin": false, 00:16:49.525 "nvme_io": false, 00:16:49.525 "nvme_io_md": false, 00:16:49.525 "write_zeroes": true, 00:16:49.525 "zcopy": true, 00:16:49.525 "get_zone_info": false, 00:16:49.525 "zone_management": false, 00:16:49.525 "zone_append": false, 00:16:49.525 "compare": false, 00:16:49.525 "compare_and_write": false, 00:16:49.525 "abort": true, 00:16:49.525 "seek_hole": false, 00:16:49.525 "seek_data": false, 00:16:49.525 "copy": true, 00:16:49.525 "nvme_iov_md": false 00:16:49.525 }, 00:16:49.525 "memory_domains": [ 00:16:49.525 { 00:16:49.525 "dma_device_id": "system", 00:16:49.525 "dma_device_type": 1 00:16:49.525 }, 00:16:49.525 { 00:16:49.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.525 "dma_device_type": 2 00:16:49.525 } 00:16:49.525 ], 00:16:49.525 "driver_specific": {} 00:16:49.525 } 00:16:49.525 ] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.525 20:09:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.525 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.525 "name": "Existed_Raid", 00:16:49.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.525 "strip_size_kb": 64, 00:16:49.525 "state": "configuring", 00:16:49.526 "raid_level": "raid5f", 00:16:49.526 "superblock": false, 00:16:49.526 "num_base_bdevs": 3, 00:16:49.526 "num_base_bdevs_discovered": 1, 00:16:49.526 "num_base_bdevs_operational": 3, 00:16:49.526 "base_bdevs_list": [ 00:16:49.526 { 00:16:49.526 "name": "BaseBdev1", 00:16:49.526 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:49.526 "is_configured": true, 00:16:49.526 "data_offset": 0, 00:16:49.526 "data_size": 65536 00:16:49.526 }, 00:16:49.526 { 00:16:49.526 "name": 
"BaseBdev2", 00:16:49.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.526 "is_configured": false, 00:16:49.526 "data_offset": 0, 00:16:49.526 "data_size": 0 00:16:49.526 }, 00:16:49.526 { 00:16:49.526 "name": "BaseBdev3", 00:16:49.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.526 "is_configured": false, 00:16:49.526 "data_offset": 0, 00:16:49.526 "data_size": 0 00:16:49.526 } 00:16:49.526 ] 00:16:49.526 }' 00:16:49.526 20:09:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.526 20:09:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 [2024-12-05 20:09:51.280656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.094 [2024-12-05 20:09:51.280706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 [2024-12-05 20:09:51.288691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.094 [2024-12-05 20:09:51.290572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:50.094 [2024-12-05 20:09:51.290667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.094 [2024-12-05 20:09:51.290703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.094 [2024-12-05 20:09:51.290728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.095 "name": "Existed_Raid", 00:16:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.095 "strip_size_kb": 64, 00:16:50.095 "state": "configuring", 00:16:50.095 "raid_level": "raid5f", 00:16:50.095 "superblock": false, 00:16:50.095 "num_base_bdevs": 3, 00:16:50.095 "num_base_bdevs_discovered": 1, 00:16:50.095 "num_base_bdevs_operational": 3, 00:16:50.095 "base_bdevs_list": [ 00:16:50.095 { 00:16:50.095 "name": "BaseBdev1", 00:16:50.095 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:50.095 "is_configured": true, 00:16:50.095 "data_offset": 0, 00:16:50.095 "data_size": 65536 00:16:50.095 }, 00:16:50.095 { 00:16:50.095 "name": "BaseBdev2", 00:16:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.095 "is_configured": false, 00:16:50.095 "data_offset": 0, 00:16:50.095 "data_size": 0 00:16:50.095 }, 00:16:50.095 { 00:16:50.095 "name": "BaseBdev3", 00:16:50.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.095 "is_configured": false, 00:16:50.095 "data_offset": 0, 00:16:50.095 "data_size": 0 00:16:50.095 } 00:16:50.095 ] 00:16:50.095 }' 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.095 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.354 20:09:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.354 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.354 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.354 [2024-12-05 20:09:51.752396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.354 BaseBdev2 00:16:50.354 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.354 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.355 20:09:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.355 [ 00:16:50.355 { 00:16:50.355 "name": "BaseBdev2", 00:16:50.355 "aliases": [ 00:16:50.355 "f19f6974-fe77-4e10-9996-b21b7505f306" 00:16:50.355 ], 00:16:50.355 "product_name": "Malloc disk", 00:16:50.355 "block_size": 512, 00:16:50.355 "num_blocks": 65536, 00:16:50.355 "uuid": "f19f6974-fe77-4e10-9996-b21b7505f306", 00:16:50.355 "assigned_rate_limits": { 00:16:50.355 "rw_ios_per_sec": 0, 00:16:50.355 "rw_mbytes_per_sec": 0, 00:16:50.355 "r_mbytes_per_sec": 0, 00:16:50.355 "w_mbytes_per_sec": 0 00:16:50.355 }, 00:16:50.355 "claimed": true, 00:16:50.355 "claim_type": "exclusive_write", 00:16:50.355 "zoned": false, 00:16:50.355 "supported_io_types": { 00:16:50.355 "read": true, 00:16:50.355 "write": true, 00:16:50.355 "unmap": true, 00:16:50.355 "flush": true, 00:16:50.355 "reset": true, 00:16:50.355 "nvme_admin": false, 00:16:50.355 "nvme_io": false, 00:16:50.355 "nvme_io_md": false, 00:16:50.355 "write_zeroes": true, 00:16:50.355 "zcopy": true, 00:16:50.355 "get_zone_info": false, 00:16:50.355 "zone_management": false, 00:16:50.355 "zone_append": false, 00:16:50.355 "compare": false, 00:16:50.355 "compare_and_write": false, 00:16:50.355 "abort": true, 00:16:50.355 "seek_hole": false, 00:16:50.355 "seek_data": false, 00:16:50.355 "copy": true, 00:16:50.355 "nvme_iov_md": false 00:16:50.355 }, 00:16:50.355 "memory_domains": [ 00:16:50.355 { 00:16:50.355 "dma_device_id": "system", 00:16:50.355 "dma_device_type": 1 00:16:50.355 }, 00:16:50.355 { 00:16:50.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.355 "dma_device_type": 2 00:16:50.355 } 00:16:50.355 ], 00:16:50.355 "driver_specific": {} 00:16:50.355 } 00:16:50.355 ] 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.614 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:50.614 "name": "Existed_Raid", 00:16:50.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.614 "strip_size_kb": 64, 00:16:50.614 "state": "configuring", 00:16:50.614 "raid_level": "raid5f", 00:16:50.614 "superblock": false, 00:16:50.614 "num_base_bdevs": 3, 00:16:50.614 "num_base_bdevs_discovered": 2, 00:16:50.614 "num_base_bdevs_operational": 3, 00:16:50.614 "base_bdevs_list": [ 00:16:50.614 { 00:16:50.614 "name": "BaseBdev1", 00:16:50.614 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:50.614 "is_configured": true, 00:16:50.614 "data_offset": 0, 00:16:50.614 "data_size": 65536 00:16:50.614 }, 00:16:50.614 { 00:16:50.614 "name": "BaseBdev2", 00:16:50.614 "uuid": "f19f6974-fe77-4e10-9996-b21b7505f306", 00:16:50.614 "is_configured": true, 00:16:50.614 "data_offset": 0, 00:16:50.614 "data_size": 65536 00:16:50.614 }, 00:16:50.614 { 00:16:50.614 "name": "BaseBdev3", 00:16:50.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.614 "is_configured": false, 00:16:50.614 "data_offset": 0, 00:16:50.614 "data_size": 0 00:16:50.614 } 00:16:50.614 ] 00:16:50.614 }' 00:16:50.615 20:09:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.615 20:09:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.874 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.874 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.874 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.133 [2024-12-05 20:09:52.327511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.133 [2024-12-05 20:09:52.327577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.134 [2024-12-05 20:09:52.327592] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:51.134 [2024-12-05 20:09:52.327834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:51.134 [2024-12-05 20:09:52.333052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.134 [2024-12-05 20:09:52.333073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:51.134 [2024-12-05 20:09:52.333350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.134 BaseBdev3 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.134 [ 00:16:51.134 { 00:16:51.134 "name": "BaseBdev3", 00:16:51.134 "aliases": [ 00:16:51.134 "fdf4e4be-6b41-4484-aa43-1a1a76cc48ef" 00:16:51.134 ], 00:16:51.134 "product_name": "Malloc disk", 00:16:51.134 "block_size": 512, 00:16:51.134 "num_blocks": 65536, 00:16:51.134 "uuid": "fdf4e4be-6b41-4484-aa43-1a1a76cc48ef", 00:16:51.134 "assigned_rate_limits": { 00:16:51.134 "rw_ios_per_sec": 0, 00:16:51.134 "rw_mbytes_per_sec": 0, 00:16:51.134 "r_mbytes_per_sec": 0, 00:16:51.134 "w_mbytes_per_sec": 0 00:16:51.134 }, 00:16:51.134 "claimed": true, 00:16:51.134 "claim_type": "exclusive_write", 00:16:51.134 "zoned": false, 00:16:51.134 "supported_io_types": { 00:16:51.134 "read": true, 00:16:51.134 "write": true, 00:16:51.134 "unmap": true, 00:16:51.134 "flush": true, 00:16:51.134 "reset": true, 00:16:51.134 "nvme_admin": false, 00:16:51.134 "nvme_io": false, 00:16:51.134 "nvme_io_md": false, 00:16:51.134 "write_zeroes": true, 00:16:51.134 "zcopy": true, 00:16:51.134 "get_zone_info": false, 00:16:51.134 "zone_management": false, 00:16:51.134 "zone_append": false, 00:16:51.134 "compare": false, 00:16:51.134 "compare_and_write": false, 00:16:51.134 "abort": true, 00:16:51.134 "seek_hole": false, 00:16:51.134 "seek_data": false, 00:16:51.134 "copy": true, 00:16:51.134 "nvme_iov_md": false 00:16:51.134 }, 00:16:51.134 "memory_domains": [ 00:16:51.134 { 00:16:51.134 "dma_device_id": "system", 00:16:51.134 "dma_device_type": 1 00:16:51.134 }, 00:16:51.134 { 00:16:51.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.134 "dma_device_type": 2 00:16:51.134 } 00:16:51.134 ], 00:16:51.134 "driver_specific": {} 00:16:51.134 } 00:16:51.134 ] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.134 20:09:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.134 "name": "Existed_Raid", 00:16:51.134 "uuid": "b7c146bb-dee0-4104-8560-31e58bd7b86b", 00:16:51.134 "strip_size_kb": 64, 00:16:51.134 "state": "online", 00:16:51.134 "raid_level": "raid5f", 00:16:51.134 "superblock": false, 00:16:51.134 "num_base_bdevs": 3, 00:16:51.134 "num_base_bdevs_discovered": 3, 00:16:51.134 "num_base_bdevs_operational": 3, 00:16:51.134 "base_bdevs_list": [ 00:16:51.134 { 00:16:51.134 "name": "BaseBdev1", 00:16:51.134 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:51.134 "is_configured": true, 00:16:51.134 "data_offset": 0, 00:16:51.134 "data_size": 65536 00:16:51.134 }, 00:16:51.134 { 00:16:51.134 "name": "BaseBdev2", 00:16:51.134 "uuid": "f19f6974-fe77-4e10-9996-b21b7505f306", 00:16:51.134 "is_configured": true, 00:16:51.134 "data_offset": 0, 00:16:51.134 "data_size": 65536 00:16:51.134 }, 00:16:51.134 { 00:16:51.134 "name": "BaseBdev3", 00:16:51.134 "uuid": "fdf4e4be-6b41-4484-aa43-1a1a76cc48ef", 00:16:51.134 "is_configured": true, 00:16:51.134 "data_offset": 0, 00:16:51.134 "data_size": 65536 00:16:51.134 } 00:16:51.134 ] 00:16:51.134 }' 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.134 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.394 20:09:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.394 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.394 [2024-12-05 20:09:52.810764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.654 "name": "Existed_Raid", 00:16:51.654 "aliases": [ 00:16:51.654 "b7c146bb-dee0-4104-8560-31e58bd7b86b" 00:16:51.654 ], 00:16:51.654 "product_name": "Raid Volume", 00:16:51.654 "block_size": 512, 00:16:51.654 "num_blocks": 131072, 00:16:51.654 "uuid": "b7c146bb-dee0-4104-8560-31e58bd7b86b", 00:16:51.654 "assigned_rate_limits": { 00:16:51.654 "rw_ios_per_sec": 0, 00:16:51.654 "rw_mbytes_per_sec": 0, 00:16:51.654 "r_mbytes_per_sec": 0, 00:16:51.654 "w_mbytes_per_sec": 0 00:16:51.654 }, 00:16:51.654 "claimed": false, 00:16:51.654 "zoned": false, 00:16:51.654 "supported_io_types": { 00:16:51.654 "read": true, 00:16:51.654 "write": true, 00:16:51.654 "unmap": false, 00:16:51.654 "flush": false, 00:16:51.654 "reset": true, 00:16:51.654 "nvme_admin": false, 00:16:51.654 "nvme_io": false, 00:16:51.654 "nvme_io_md": false, 00:16:51.654 "write_zeroes": true, 00:16:51.654 "zcopy": false, 00:16:51.654 "get_zone_info": false, 00:16:51.654 "zone_management": false, 00:16:51.654 "zone_append": false, 
00:16:51.654 "compare": false, 00:16:51.654 "compare_and_write": false, 00:16:51.654 "abort": false, 00:16:51.654 "seek_hole": false, 00:16:51.654 "seek_data": false, 00:16:51.654 "copy": false, 00:16:51.654 "nvme_iov_md": false 00:16:51.654 }, 00:16:51.654 "driver_specific": { 00:16:51.654 "raid": { 00:16:51.654 "uuid": "b7c146bb-dee0-4104-8560-31e58bd7b86b", 00:16:51.654 "strip_size_kb": 64, 00:16:51.654 "state": "online", 00:16:51.654 "raid_level": "raid5f", 00:16:51.654 "superblock": false, 00:16:51.654 "num_base_bdevs": 3, 00:16:51.654 "num_base_bdevs_discovered": 3, 00:16:51.654 "num_base_bdevs_operational": 3, 00:16:51.654 "base_bdevs_list": [ 00:16:51.654 { 00:16:51.654 "name": "BaseBdev1", 00:16:51.654 "uuid": "19efd065-e800-488b-9a56-c78b5b851052", 00:16:51.654 "is_configured": true, 00:16:51.654 "data_offset": 0, 00:16:51.654 "data_size": 65536 00:16:51.654 }, 00:16:51.654 { 00:16:51.654 "name": "BaseBdev2", 00:16:51.654 "uuid": "f19f6974-fe77-4e10-9996-b21b7505f306", 00:16:51.654 "is_configured": true, 00:16:51.654 "data_offset": 0, 00:16:51.654 "data_size": 65536 00:16:51.654 }, 00:16:51.654 { 00:16:51.654 "name": "BaseBdev3", 00:16:51.654 "uuid": "fdf4e4be-6b41-4484-aa43-1a1a76cc48ef", 00:16:51.654 "is_configured": true, 00:16:51.654 "data_offset": 0, 00:16:51.654 "data_size": 65536 00:16:51.654 } 00:16:51.654 ] 00:16:51.654 } 00:16:51.654 } 00:16:51.654 }' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:51.654 BaseBdev2 00:16:51.654 BaseBdev3' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 20:09:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.655 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 [2024-12-05 20:09:53.050165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:51.915 
20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.915 "name": "Existed_Raid", 00:16:51.915 "uuid": "b7c146bb-dee0-4104-8560-31e58bd7b86b", 00:16:51.915 "strip_size_kb": 64, 00:16:51.915 "state": 
"online", 00:16:51.915 "raid_level": "raid5f", 00:16:51.915 "superblock": false, 00:16:51.915 "num_base_bdevs": 3, 00:16:51.915 "num_base_bdevs_discovered": 2, 00:16:51.915 "num_base_bdevs_operational": 2, 00:16:51.915 "base_bdevs_list": [ 00:16:51.915 { 00:16:51.915 "name": null, 00:16:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.915 "is_configured": false, 00:16:51.915 "data_offset": 0, 00:16:51.915 "data_size": 65536 00:16:51.915 }, 00:16:51.915 { 00:16:51.915 "name": "BaseBdev2", 00:16:51.915 "uuid": "f19f6974-fe77-4e10-9996-b21b7505f306", 00:16:51.915 "is_configured": true, 00:16:51.915 "data_offset": 0, 00:16:51.915 "data_size": 65536 00:16:51.915 }, 00:16:51.915 { 00:16:51.915 "name": "BaseBdev3", 00:16:51.915 "uuid": "fdf4e4be-6b41-4484-aa43-1a1a76cc48ef", 00:16:51.915 "is_configured": true, 00:16:51.915 "data_offset": 0, 00:16:51.915 "data_size": 65536 00:16:51.915 } 00:16:51.915 ] 00:16:51.915 }' 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.915 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.175 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.435 [2024-12-05 20:09:53.650795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.435 [2024-12-05 20:09:53.650905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.435 [2024-12-05 20:09:53.741242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.435 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.435 [2024-12-05 20:09:53.801134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:52.435 [2024-12-05 20:09:53.801184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:52.694 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.695 BaseBdev2 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:52.695 [ 00:16:52.695 { 00:16:52.695 "name": "BaseBdev2", 00:16:52.695 "aliases": [ 00:16:52.695 "0a1bef67-c812-49d4-b2a8-fe21fdedb681" 00:16:52.695 ], 00:16:52.695 "product_name": "Malloc disk", 00:16:52.695 "block_size": 512, 00:16:52.695 "num_blocks": 65536, 00:16:52.695 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:52.695 "assigned_rate_limits": { 00:16:52.695 "rw_ios_per_sec": 0, 00:16:52.695 "rw_mbytes_per_sec": 0, 00:16:52.695 "r_mbytes_per_sec": 0, 00:16:52.695 "w_mbytes_per_sec": 0 00:16:52.695 }, 00:16:52.695 "claimed": false, 00:16:52.695 "zoned": false, 00:16:52.695 "supported_io_types": { 00:16:52.695 "read": true, 00:16:52.695 "write": true, 00:16:52.695 "unmap": true, 00:16:52.695 "flush": true, 00:16:52.695 "reset": true, 00:16:52.695 "nvme_admin": false, 00:16:52.695 "nvme_io": false, 00:16:52.695 "nvme_io_md": false, 00:16:52.695 "write_zeroes": true, 00:16:52.695 "zcopy": true, 00:16:52.695 "get_zone_info": false, 00:16:52.695 "zone_management": false, 00:16:52.695 "zone_append": false, 00:16:52.695 "compare": false, 00:16:52.695 "compare_and_write": false, 00:16:52.695 "abort": true, 00:16:52.695 "seek_hole": false, 00:16:52.695 "seek_data": false, 00:16:52.695 "copy": true, 00:16:52.695 "nvme_iov_md": false 00:16:52.695 }, 00:16:52.695 "memory_domains": [ 00:16:52.695 { 00:16:52.695 "dma_device_id": "system", 00:16:52.695 "dma_device_type": 1 00:16:52.695 }, 00:16:52.695 { 00:16:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.695 "dma_device_type": 2 00:16:52.695 } 00:16:52.695 ], 00:16:52.695 "driver_specific": {} 00:16:52.695 } 00:16:52.695 ] 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.695 BaseBdev3 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.695 20:09:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.695 [ 00:16:52.695 { 00:16:52.695 "name": "BaseBdev3", 00:16:52.695 "aliases": [ 00:16:52.695 "8ffef185-3dc5-45d5-9f71-994d902f6e2f" 00:16:52.695 ], 00:16:52.695 "product_name": "Malloc disk", 00:16:52.695 "block_size": 512, 00:16:52.695 "num_blocks": 65536, 00:16:52.695 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:52.695 "assigned_rate_limits": { 00:16:52.695 "rw_ios_per_sec": 0, 00:16:52.695 "rw_mbytes_per_sec": 0, 00:16:52.695 "r_mbytes_per_sec": 0, 00:16:52.695 "w_mbytes_per_sec": 0 00:16:52.695 }, 00:16:52.695 "claimed": false, 00:16:52.695 "zoned": false, 00:16:52.695 "supported_io_types": { 00:16:52.695 "read": true, 00:16:52.695 "write": true, 00:16:52.695 "unmap": true, 00:16:52.695 "flush": true, 00:16:52.695 "reset": true, 00:16:52.696 "nvme_admin": false, 00:16:52.696 "nvme_io": false, 00:16:52.696 "nvme_io_md": false, 00:16:52.696 "write_zeroes": true, 00:16:52.696 "zcopy": true, 00:16:52.696 "get_zone_info": false, 00:16:52.696 "zone_management": false, 00:16:52.696 "zone_append": false, 00:16:52.696 "compare": false, 00:16:52.696 "compare_and_write": false, 00:16:52.696 "abort": true, 00:16:52.696 "seek_hole": false, 00:16:52.696 "seek_data": false, 00:16:52.696 "copy": true, 00:16:52.696 "nvme_iov_md": false 00:16:52.696 }, 00:16:52.696 "memory_domains": [ 00:16:52.696 { 00:16:52.696 "dma_device_id": "system", 00:16:52.696 "dma_device_type": 1 00:16:52.696 }, 00:16:52.696 { 00:16:52.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.696 "dma_device_type": 2 00:16:52.696 } 00:16:52.696 ], 00:16:52.696 "driver_specific": {} 00:16:52.696 } 00:16:52.696 ] 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.696 20:09:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.696 [2024-12-05 20:09:54.104582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.696 [2024-12-05 20:09:54.104637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.696 [2024-12-05 20:09:54.104657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.696 [2024-12-05 20:09:54.106425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.696 20:09:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.696 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.956 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.956 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.956 "name": "Existed_Raid", 00:16:52.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.956 "strip_size_kb": 64, 00:16:52.956 "state": "configuring", 00:16:52.956 "raid_level": "raid5f", 00:16:52.956 "superblock": false, 00:16:52.956 "num_base_bdevs": 3, 00:16:52.956 "num_base_bdevs_discovered": 2, 00:16:52.956 "num_base_bdevs_operational": 3, 00:16:52.956 "base_bdevs_list": [ 00:16:52.956 { 00:16:52.956 "name": "BaseBdev1", 00:16:52.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.956 "is_configured": false, 00:16:52.956 "data_offset": 0, 00:16:52.956 "data_size": 0 00:16:52.956 }, 00:16:52.956 { 00:16:52.956 "name": "BaseBdev2", 00:16:52.956 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:52.956 "is_configured": true, 00:16:52.956 "data_offset": 0, 00:16:52.956 "data_size": 65536 00:16:52.956 }, 00:16:52.956 { 00:16:52.956 "name": "BaseBdev3", 00:16:52.956 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:52.956 "is_configured": true, 
00:16:52.956 "data_offset": 0, 00:16:52.956 "data_size": 65536 00:16:52.956 } 00:16:52.956 ] 00:16:52.956 }' 00:16:52.956 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.956 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.216 [2024-12-05 20:09:54.571808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.216 20:09:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.216 "name": "Existed_Raid", 00:16:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.216 "strip_size_kb": 64, 00:16:53.216 "state": "configuring", 00:16:53.216 "raid_level": "raid5f", 00:16:53.216 "superblock": false, 00:16:53.216 "num_base_bdevs": 3, 00:16:53.216 "num_base_bdevs_discovered": 1, 00:16:53.216 "num_base_bdevs_operational": 3, 00:16:53.216 "base_bdevs_list": [ 00:16:53.216 { 00:16:53.216 "name": "BaseBdev1", 00:16:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.216 "is_configured": false, 00:16:53.216 "data_offset": 0, 00:16:53.216 "data_size": 0 00:16:53.216 }, 00:16:53.216 { 00:16:53.216 "name": null, 00:16:53.216 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:53.216 "is_configured": false, 00:16:53.216 "data_offset": 0, 00:16:53.216 "data_size": 65536 00:16:53.216 }, 00:16:53.216 { 00:16:53.216 "name": "BaseBdev3", 00:16:53.216 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:53.216 "is_configured": true, 00:16:53.216 "data_offset": 0, 00:16:53.216 "data_size": 65536 00:16:53.216 } 00:16:53.216 ] 00:16:53.216 }' 00:16:53.216 20:09:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.216 20:09:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 [2024-12-05 20:09:55.126357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.787 BaseBdev1 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.787 20:09:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 [ 00:16:53.787 { 00:16:53.787 "name": "BaseBdev1", 00:16:53.787 "aliases": [ 00:16:53.787 "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b" 00:16:53.787 ], 00:16:53.787 "product_name": "Malloc disk", 00:16:53.787 "block_size": 512, 00:16:53.787 "num_blocks": 65536, 00:16:53.787 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:53.787 "assigned_rate_limits": { 00:16:53.787 "rw_ios_per_sec": 0, 00:16:53.787 "rw_mbytes_per_sec": 0, 00:16:53.787 "r_mbytes_per_sec": 0, 00:16:53.787 "w_mbytes_per_sec": 0 00:16:53.787 }, 00:16:53.787 "claimed": true, 00:16:53.787 "claim_type": "exclusive_write", 00:16:53.787 "zoned": false, 00:16:53.787 "supported_io_types": { 00:16:53.787 "read": true, 00:16:53.787 "write": true, 00:16:53.787 "unmap": true, 00:16:53.787 "flush": true, 00:16:53.787 "reset": true, 00:16:53.787 "nvme_admin": false, 00:16:53.787 "nvme_io": false, 00:16:53.787 "nvme_io_md": false, 00:16:53.787 "write_zeroes": true, 00:16:53.787 "zcopy": true, 00:16:53.787 "get_zone_info": false, 00:16:53.787 "zone_management": false, 00:16:53.787 "zone_append": false, 00:16:53.787 
"compare": false, 00:16:53.787 "compare_and_write": false, 00:16:53.787 "abort": true, 00:16:53.787 "seek_hole": false, 00:16:53.787 "seek_data": false, 00:16:53.787 "copy": true, 00:16:53.787 "nvme_iov_md": false 00:16:53.787 }, 00:16:53.787 "memory_domains": [ 00:16:53.787 { 00:16:53.787 "dma_device_id": "system", 00:16:53.787 "dma_device_type": 1 00:16:53.787 }, 00:16:53.787 { 00:16:53.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.787 "dma_device_type": 2 00:16:53.787 } 00:16:53.787 ], 00:16:53.787 "driver_specific": {} 00:16:53.787 } 00:16:53.787 ] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.787 20:09:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.787 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.787 "name": "Existed_Raid", 00:16:53.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.787 "strip_size_kb": 64, 00:16:53.787 "state": "configuring", 00:16:53.787 "raid_level": "raid5f", 00:16:53.787 "superblock": false, 00:16:53.787 "num_base_bdevs": 3, 00:16:53.787 "num_base_bdevs_discovered": 2, 00:16:53.787 "num_base_bdevs_operational": 3, 00:16:53.787 "base_bdevs_list": [ 00:16:53.787 { 00:16:53.787 "name": "BaseBdev1", 00:16:53.788 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:53.788 "is_configured": true, 00:16:53.788 "data_offset": 0, 00:16:53.788 "data_size": 65536 00:16:53.788 }, 00:16:53.788 { 00:16:53.788 "name": null, 00:16:53.788 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:53.788 "is_configured": false, 00:16:53.788 "data_offset": 0, 00:16:53.788 "data_size": 65536 00:16:53.788 }, 00:16:53.788 { 00:16:53.788 "name": "BaseBdev3", 00:16:53.788 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:53.788 "is_configured": true, 00:16:53.788 "data_offset": 0, 00:16:53.788 "data_size": 65536 00:16:53.788 } 00:16:53.788 ] 00:16:53.788 }' 00:16:53.788 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.788 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.356 20:09:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.356 [2024-12-05 20:09:55.641527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.356 20:09:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.356 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.356 "name": "Existed_Raid", 00:16:54.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.356 "strip_size_kb": 64, 00:16:54.356 "state": "configuring", 00:16:54.356 "raid_level": "raid5f", 00:16:54.356 "superblock": false, 00:16:54.356 "num_base_bdevs": 3, 00:16:54.356 "num_base_bdevs_discovered": 1, 00:16:54.356 "num_base_bdevs_operational": 3, 00:16:54.356 "base_bdevs_list": [ 00:16:54.356 { 00:16:54.356 "name": "BaseBdev1", 00:16:54.356 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:54.356 "is_configured": true, 00:16:54.356 "data_offset": 0, 00:16:54.356 "data_size": 65536 00:16:54.356 }, 00:16:54.356 { 00:16:54.356 "name": null, 00:16:54.356 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:54.357 "is_configured": false, 00:16:54.357 "data_offset": 0, 00:16:54.357 "data_size": 65536 00:16:54.357 }, 00:16:54.357 { 00:16:54.357 "name": null, 
00:16:54.357 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:54.357 "is_configured": false, 00:16:54.357 "data_offset": 0, 00:16:54.357 "data_size": 65536 00:16:54.357 } 00:16:54.357 ] 00:16:54.357 }' 00:16:54.357 20:09:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.357 20:09:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:54.925 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 [2024-12-05 20:09:56.116734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.926 20:09:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.926 "name": "Existed_Raid", 00:16:54.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.926 "strip_size_kb": 64, 00:16:54.926 "state": "configuring", 00:16:54.926 "raid_level": "raid5f", 00:16:54.926 "superblock": false, 00:16:54.926 "num_base_bdevs": 3, 00:16:54.926 "num_base_bdevs_discovered": 2, 00:16:54.926 "num_base_bdevs_operational": 3, 00:16:54.926 "base_bdevs_list": [ 00:16:54.926 { 
00:16:54.926 "name": "BaseBdev1", 00:16:54.926 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:54.926 "is_configured": true, 00:16:54.926 "data_offset": 0, 00:16:54.926 "data_size": 65536 00:16:54.926 }, 00:16:54.926 { 00:16:54.926 "name": null, 00:16:54.926 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:54.926 "is_configured": false, 00:16:54.926 "data_offset": 0, 00:16:54.926 "data_size": 65536 00:16:54.926 }, 00:16:54.926 { 00:16:54.926 "name": "BaseBdev3", 00:16:54.926 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:54.926 "is_configured": true, 00:16:54.926 "data_offset": 0, 00:16:54.926 "data_size": 65536 00:16:54.926 } 00:16:54.926 ] 00:16:54.926 }' 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.926 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.186 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.186 [2024-12-05 20:09:56.544020] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.446 "name": "Existed_Raid", 00:16:55.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.446 "strip_size_kb": 64, 00:16:55.446 "state": "configuring", 00:16:55.446 "raid_level": "raid5f", 00:16:55.446 "superblock": false, 00:16:55.446 "num_base_bdevs": 3, 00:16:55.446 "num_base_bdevs_discovered": 1, 00:16:55.446 "num_base_bdevs_operational": 3, 00:16:55.446 "base_bdevs_list": [ 00:16:55.446 { 00:16:55.446 "name": null, 00:16:55.446 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:55.446 "is_configured": false, 00:16:55.446 "data_offset": 0, 00:16:55.446 "data_size": 65536 00:16:55.446 }, 00:16:55.446 { 00:16:55.446 "name": null, 00:16:55.446 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:55.446 "is_configured": false, 00:16:55.446 "data_offset": 0, 00:16:55.446 "data_size": 65536 00:16:55.446 }, 00:16:55.446 { 00:16:55.446 "name": "BaseBdev3", 00:16:55.446 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:55.446 "is_configured": true, 00:16:55.446 "data_offset": 0, 00:16:55.446 "data_size": 65536 00:16:55.446 } 00:16:55.446 ] 00:16:55.446 }' 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.446 20:09:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:55.705 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.706 [2024-12-05 20:09:57.122703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.706 20:09:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.706 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.965 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.965 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.965 "name": "Existed_Raid", 00:16:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.965 "strip_size_kb": 64, 00:16:55.965 "state": "configuring", 00:16:55.965 "raid_level": "raid5f", 00:16:55.965 "superblock": false, 00:16:55.965 "num_base_bdevs": 3, 00:16:55.965 "num_base_bdevs_discovered": 2, 00:16:55.965 "num_base_bdevs_operational": 3, 00:16:55.965 "base_bdevs_list": [ 00:16:55.965 { 00:16:55.965 "name": null, 00:16:55.965 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:55.965 "is_configured": false, 00:16:55.965 "data_offset": 0, 00:16:55.965 "data_size": 65536 00:16:55.965 }, 00:16:55.965 { 00:16:55.965 "name": "BaseBdev2", 00:16:55.965 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:55.965 "is_configured": true, 00:16:55.965 "data_offset": 0, 00:16:55.965 "data_size": 65536 00:16:55.965 }, 00:16:55.965 { 00:16:55.965 "name": "BaseBdev3", 00:16:55.965 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:55.965 "is_configured": true, 00:16:55.965 "data_offset": 0, 00:16:55.965 "data_size": 65536 00:16:55.965 } 00:16:55.965 ] 00:16:55.965 }' 00:16:55.965 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.965 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.225 20:09:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3e13e28-fd57-414c-8b2e-17d3d5c8b97b 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.225 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.484 [2024-12-05 20:09:57.678003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:56.484 [2024-12-05 20:09:57.678046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:56.484 [2024-12-05 20:09:57.678055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:56.484 [2024-12-05 20:09:57.678314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:56.484 [2024-12-05 20:09:57.683366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:56.484 [2024-12-05 20:09:57.683390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:56.484 [2024-12-05 20:09:57.683646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.484 NewBaseBdev 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.485 20:09:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.485 [ 00:16:56.485 { 00:16:56.485 "name": "NewBaseBdev", 00:16:56.485 "aliases": [ 00:16:56.485 "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b" 00:16:56.485 ], 00:16:56.485 "product_name": "Malloc disk", 00:16:56.485 "block_size": 512, 00:16:56.485 "num_blocks": 65536, 00:16:56.485 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:56.485 "assigned_rate_limits": { 00:16:56.485 "rw_ios_per_sec": 0, 00:16:56.485 "rw_mbytes_per_sec": 0, 00:16:56.485 "r_mbytes_per_sec": 0, 00:16:56.485 "w_mbytes_per_sec": 0 00:16:56.485 }, 00:16:56.485 "claimed": true, 00:16:56.485 "claim_type": "exclusive_write", 00:16:56.485 "zoned": false, 00:16:56.485 "supported_io_types": { 00:16:56.485 "read": true, 00:16:56.485 "write": true, 00:16:56.485 "unmap": true, 00:16:56.485 "flush": true, 00:16:56.485 "reset": true, 00:16:56.485 "nvme_admin": false, 00:16:56.485 "nvme_io": false, 00:16:56.485 "nvme_io_md": false, 00:16:56.485 "write_zeroes": true, 00:16:56.485 "zcopy": true, 00:16:56.485 "get_zone_info": false, 00:16:56.485 "zone_management": false, 00:16:56.485 "zone_append": false, 00:16:56.485 "compare": false, 00:16:56.485 "compare_and_write": false, 00:16:56.485 "abort": true, 00:16:56.485 "seek_hole": false, 00:16:56.485 "seek_data": false, 00:16:56.485 "copy": true, 00:16:56.485 "nvme_iov_md": false 00:16:56.485 }, 00:16:56.485 "memory_domains": [ 00:16:56.485 { 00:16:56.485 "dma_device_id": "system", 00:16:56.485 "dma_device_type": 1 00:16:56.485 }, 00:16:56.485 { 00:16:56.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.485 "dma_device_type": 2 00:16:56.485 } 00:16:56.485 ], 00:16:56.485 "driver_specific": {} 00:16:56.485 } 00:16:56.485 ] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.485 20:09:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.485 "name": "Existed_Raid", 00:16:56.485 "uuid": "e2568374-f09e-4866-8833-68e1873debea", 00:16:56.485 "strip_size_kb": 64, 00:16:56.485 "state": "online", 
00:16:56.485 "raid_level": "raid5f", 00:16:56.485 "superblock": false, 00:16:56.485 "num_base_bdevs": 3, 00:16:56.485 "num_base_bdevs_discovered": 3, 00:16:56.485 "num_base_bdevs_operational": 3, 00:16:56.485 "base_bdevs_list": [ 00:16:56.485 { 00:16:56.485 "name": "NewBaseBdev", 00:16:56.485 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:56.485 "is_configured": true, 00:16:56.485 "data_offset": 0, 00:16:56.485 "data_size": 65536 00:16:56.485 }, 00:16:56.485 { 00:16:56.485 "name": "BaseBdev2", 00:16:56.485 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:56.485 "is_configured": true, 00:16:56.485 "data_offset": 0, 00:16:56.485 "data_size": 65536 00:16:56.485 }, 00:16:56.485 { 00:16:56.485 "name": "BaseBdev3", 00:16:56.485 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:56.485 "is_configured": true, 00:16:56.485 "data_offset": 0, 00:16:56.485 "data_size": 65536 00:16:56.485 } 00:16:56.485 ] 00:16:56.485 }' 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.485 20:09:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.745 20:09:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.745 [2024-12-05 20:09:58.157213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.745 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.006 "name": "Existed_Raid", 00:16:57.006 "aliases": [ 00:16:57.006 "e2568374-f09e-4866-8833-68e1873debea" 00:16:57.006 ], 00:16:57.006 "product_name": "Raid Volume", 00:16:57.006 "block_size": 512, 00:16:57.006 "num_blocks": 131072, 00:16:57.006 "uuid": "e2568374-f09e-4866-8833-68e1873debea", 00:16:57.006 "assigned_rate_limits": { 00:16:57.006 "rw_ios_per_sec": 0, 00:16:57.006 "rw_mbytes_per_sec": 0, 00:16:57.006 "r_mbytes_per_sec": 0, 00:16:57.006 "w_mbytes_per_sec": 0 00:16:57.006 }, 00:16:57.006 "claimed": false, 00:16:57.006 "zoned": false, 00:16:57.006 "supported_io_types": { 00:16:57.006 "read": true, 00:16:57.006 "write": true, 00:16:57.006 "unmap": false, 00:16:57.006 "flush": false, 00:16:57.006 "reset": true, 00:16:57.006 "nvme_admin": false, 00:16:57.006 "nvme_io": false, 00:16:57.006 "nvme_io_md": false, 00:16:57.006 "write_zeroes": true, 00:16:57.006 "zcopy": false, 00:16:57.006 "get_zone_info": false, 00:16:57.006 "zone_management": false, 00:16:57.006 "zone_append": false, 00:16:57.006 "compare": false, 00:16:57.006 "compare_and_write": false, 00:16:57.006 "abort": false, 00:16:57.006 "seek_hole": false, 00:16:57.006 "seek_data": false, 00:16:57.006 "copy": false, 00:16:57.006 "nvme_iov_md": false 00:16:57.006 }, 00:16:57.006 "driver_specific": { 00:16:57.006 "raid": { 00:16:57.006 "uuid": 
"e2568374-f09e-4866-8833-68e1873debea", 00:16:57.006 "strip_size_kb": 64, 00:16:57.006 "state": "online", 00:16:57.006 "raid_level": "raid5f", 00:16:57.006 "superblock": false, 00:16:57.006 "num_base_bdevs": 3, 00:16:57.006 "num_base_bdevs_discovered": 3, 00:16:57.006 "num_base_bdevs_operational": 3, 00:16:57.006 "base_bdevs_list": [ 00:16:57.006 { 00:16:57.006 "name": "NewBaseBdev", 00:16:57.006 "uuid": "a3e13e28-fd57-414c-8b2e-17d3d5c8b97b", 00:16:57.006 "is_configured": true, 00:16:57.006 "data_offset": 0, 00:16:57.006 "data_size": 65536 00:16:57.006 }, 00:16:57.006 { 00:16:57.006 "name": "BaseBdev2", 00:16:57.006 "uuid": "0a1bef67-c812-49d4-b2a8-fe21fdedb681", 00:16:57.006 "is_configured": true, 00:16:57.006 "data_offset": 0, 00:16:57.006 "data_size": 65536 00:16:57.006 }, 00:16:57.006 { 00:16:57.006 "name": "BaseBdev3", 00:16:57.006 "uuid": "8ffef185-3dc5-45d5-9f71-994d902f6e2f", 00:16:57.006 "is_configured": true, 00:16:57.006 "data_offset": 0, 00:16:57.006 "data_size": 65536 00:16:57.006 } 00:16:57.006 ] 00:16:57.006 } 00:16:57.006 } 00:16:57.006 }' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:57.006 BaseBdev2 00:16:57.006 BaseBdev3' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.006 20:09:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.006 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.266 [2024-12-05 20:09:58.444507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.266 [2024-12-05 20:09:58.444535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.266 [2024-12-05 20:09:58.444624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.266 [2024-12-05 20:09:58.444938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.266 [2024-12-05 20:09:58.444963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79976 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79976 ']' 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79976 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79976 00:16:57.266 killing process with pid 79976 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79976' 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79976 00:16:57.266 [2024-12-05 20:09:58.485474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.266 20:09:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79976 00:16:57.526 [2024-12-05 20:09:58.767379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.466 20:09:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:58.466 00:16:58.466 real 0m10.505s 00:16:58.466 user 0m16.777s 00:16:58.466 sys 0m1.856s 00:16:58.466 20:09:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.466 20:09:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.466 ************************************ 00:16:58.466 END TEST raid5f_state_function_test 00:16:58.466 ************************************ 00:16:58.466 20:09:59 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:58.466 20:09:59 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:58.466 20:09:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.466 20:09:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.726 ************************************ 00:16:58.726 START TEST raid5f_state_function_test_sb 00:16:58.726 ************************************ 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.726 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:58.727 20:09:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80597 00:16:58.727 Process raid pid: 80597 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80597' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80597 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80597 ']' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.727 20:09:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.727 [2024-12-05 20:10:00.019647] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:16:58.727 [2024-12-05 20:10:00.019778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.986 [2024-12-05 20:10:00.202749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.987 [2024-12-05 20:10:00.313497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.246 [2024-12-05 20:10:00.505009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.246 [2024-12-05 20:10:00.505046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.505 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.506 [2024-12-05 20:10:00.833600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.506 [2024-12-05 20:10:00.833650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.506 [2024-12-05 20:10:00.833661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.506 [2024-12-05 20:10:00.833670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.506 [2024-12-05 20:10:00.833682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:59.506 [2024-12-05 20:10:00.833690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.506 20:10:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.506 "name": "Existed_Raid", 00:16:59.506 "uuid": "e15ff0e8-b974-41e4-9c42-5ed313e853ef", 00:16:59.506 "strip_size_kb": 64, 00:16:59.506 "state": "configuring", 00:16:59.506 "raid_level": "raid5f", 00:16:59.506 "superblock": true, 00:16:59.506 "num_base_bdevs": 3, 00:16:59.506 "num_base_bdevs_discovered": 0, 00:16:59.506 "num_base_bdevs_operational": 3, 00:16:59.506 "base_bdevs_list": [ 00:16:59.506 { 00:16:59.506 "name": "BaseBdev1", 00:16:59.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.506 "is_configured": false, 00:16:59.506 "data_offset": 0, 00:16:59.506 "data_size": 0 00:16:59.506 }, 00:16:59.506 { 00:16:59.506 "name": "BaseBdev2", 00:16:59.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.506 "is_configured": false, 00:16:59.506 "data_offset": 0, 00:16:59.506 "data_size": 0 00:16:59.506 }, 00:16:59.506 { 00:16:59.506 "name": "BaseBdev3", 00:16:59.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.506 "is_configured": false, 00:16:59.506 "data_offset": 0, 00:16:59.506 "data_size": 0 00:16:59.506 } 00:16:59.506 ] 00:16:59.506 }' 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.506 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 [2024-12-05 20:10:01.284759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.075 
[2024-12-05 20:10:01.284799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 [2024-12-05 20:10:01.296747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.075 [2024-12-05 20:10:01.296784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.075 [2024-12-05 20:10:01.296808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.075 [2024-12-05 20:10:01.296817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.075 [2024-12-05 20:10:01.296824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.075 [2024-12-05 20:10:01.296833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 [2024-12-05 20:10:01.343479] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.075 BaseBdev1 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 [ 00:17:00.075 { 00:17:00.075 "name": "BaseBdev1", 00:17:00.075 "aliases": [ 00:17:00.075 "22c40db7-b729-4a8d-8ded-3e4293d8920a" 00:17:00.075 ], 00:17:00.075 "product_name": "Malloc disk", 00:17:00.075 "block_size": 512, 00:17:00.075 
"num_blocks": 65536, 00:17:00.075 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:00.075 "assigned_rate_limits": { 00:17:00.075 "rw_ios_per_sec": 0, 00:17:00.075 "rw_mbytes_per_sec": 0, 00:17:00.075 "r_mbytes_per_sec": 0, 00:17:00.075 "w_mbytes_per_sec": 0 00:17:00.075 }, 00:17:00.075 "claimed": true, 00:17:00.075 "claim_type": "exclusive_write", 00:17:00.075 "zoned": false, 00:17:00.075 "supported_io_types": { 00:17:00.075 "read": true, 00:17:00.075 "write": true, 00:17:00.075 "unmap": true, 00:17:00.075 "flush": true, 00:17:00.075 "reset": true, 00:17:00.075 "nvme_admin": false, 00:17:00.075 "nvme_io": false, 00:17:00.075 "nvme_io_md": false, 00:17:00.075 "write_zeroes": true, 00:17:00.075 "zcopy": true, 00:17:00.075 "get_zone_info": false, 00:17:00.075 "zone_management": false, 00:17:00.075 "zone_append": false, 00:17:00.075 "compare": false, 00:17:00.076 "compare_and_write": false, 00:17:00.076 "abort": true, 00:17:00.076 "seek_hole": false, 00:17:00.076 "seek_data": false, 00:17:00.076 "copy": true, 00:17:00.076 "nvme_iov_md": false 00:17:00.076 }, 00:17:00.076 "memory_domains": [ 00:17:00.076 { 00:17:00.076 "dma_device_id": "system", 00:17:00.076 "dma_device_type": 1 00:17:00.076 }, 00:17:00.076 { 00:17:00.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.076 "dma_device_type": 2 00:17:00.076 } 00:17:00.076 ], 00:17:00.076 "driver_specific": {} 00:17:00.076 } 00:17:00.076 ] 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.076 "name": "Existed_Raid", 00:17:00.076 "uuid": "9acbb42f-cb9f-49aa-8db9-41143231a788", 00:17:00.076 "strip_size_kb": 64, 00:17:00.076 "state": "configuring", 00:17:00.076 "raid_level": "raid5f", 00:17:00.076 "superblock": true, 00:17:00.076 "num_base_bdevs": 3, 00:17:00.076 "num_base_bdevs_discovered": 1, 00:17:00.076 "num_base_bdevs_operational": 3, 00:17:00.076 "base_bdevs_list": [ 00:17:00.076 { 00:17:00.076 
"name": "BaseBdev1", 00:17:00.076 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:00.076 "is_configured": true, 00:17:00.076 "data_offset": 2048, 00:17:00.076 "data_size": 63488 00:17:00.076 }, 00:17:00.076 { 00:17:00.076 "name": "BaseBdev2", 00:17:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.076 "is_configured": false, 00:17:00.076 "data_offset": 0, 00:17:00.076 "data_size": 0 00:17:00.076 }, 00:17:00.076 { 00:17:00.076 "name": "BaseBdev3", 00:17:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.076 "is_configured": false, 00:17:00.076 "data_offset": 0, 00:17:00.076 "data_size": 0 00:17:00.076 } 00:17:00.076 ] 00:17:00.076 }' 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.076 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.642 [2024-12-05 20:10:01.830700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.642 [2024-12-05 20:10:01.830752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:00.642 [2024-12-05 20:10:01.842735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.642 [2024-12-05 20:10:01.844521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.642 [2024-12-05 20:10:01.844557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.642 [2024-12-05 20:10:01.844567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.642 [2024-12-05 20:10:01.844581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.642 "name": "Existed_Raid", 00:17:00.642 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:00.642 "strip_size_kb": 64, 00:17:00.642 "state": "configuring", 00:17:00.642 "raid_level": "raid5f", 00:17:00.642 "superblock": true, 00:17:00.642 "num_base_bdevs": 3, 00:17:00.642 "num_base_bdevs_discovered": 1, 00:17:00.642 "num_base_bdevs_operational": 3, 00:17:00.642 "base_bdevs_list": [ 00:17:00.642 { 00:17:00.642 "name": "BaseBdev1", 00:17:00.642 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:00.642 "is_configured": true, 00:17:00.642 "data_offset": 2048, 00:17:00.642 "data_size": 63488 00:17:00.642 }, 00:17:00.642 { 00:17:00.642 "name": "BaseBdev2", 00:17:00.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.642 "is_configured": false, 00:17:00.642 "data_offset": 0, 00:17:00.642 "data_size": 0 00:17:00.642 }, 00:17:00.642 { 00:17:00.642 "name": "BaseBdev3", 00:17:00.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.642 "is_configured": false, 00:17:00.642 "data_offset": 0, 00:17:00.642 "data_size": 
0 00:17:00.642 } 00:17:00.642 ] 00:17:00.642 }' 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.642 20:10:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 [2024-12-05 20:10:02.377591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.211 BaseBdev2 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 [ 00:17:01.211 { 00:17:01.211 "name": "BaseBdev2", 00:17:01.211 "aliases": [ 00:17:01.211 "83fcc1b8-dd56-4825-ba42-d1e3748cb60e" 00:17:01.211 ], 00:17:01.211 "product_name": "Malloc disk", 00:17:01.211 "block_size": 512, 00:17:01.211 "num_blocks": 65536, 00:17:01.211 "uuid": "83fcc1b8-dd56-4825-ba42-d1e3748cb60e", 00:17:01.211 "assigned_rate_limits": { 00:17:01.211 "rw_ios_per_sec": 0, 00:17:01.211 "rw_mbytes_per_sec": 0, 00:17:01.211 "r_mbytes_per_sec": 0, 00:17:01.211 "w_mbytes_per_sec": 0 00:17:01.211 }, 00:17:01.211 "claimed": true, 00:17:01.211 "claim_type": "exclusive_write", 00:17:01.211 "zoned": false, 00:17:01.211 "supported_io_types": { 00:17:01.211 "read": true, 00:17:01.211 "write": true, 00:17:01.211 "unmap": true, 00:17:01.211 "flush": true, 00:17:01.211 "reset": true, 00:17:01.211 "nvme_admin": false, 00:17:01.211 "nvme_io": false, 00:17:01.211 "nvme_io_md": false, 00:17:01.211 "write_zeroes": true, 00:17:01.211 "zcopy": true, 00:17:01.211 "get_zone_info": false, 00:17:01.211 "zone_management": false, 00:17:01.211 "zone_append": false, 00:17:01.211 "compare": false, 00:17:01.211 "compare_and_write": false, 00:17:01.211 "abort": true, 00:17:01.211 "seek_hole": false, 00:17:01.211 "seek_data": false, 00:17:01.211 "copy": true, 00:17:01.211 "nvme_iov_md": false 00:17:01.211 }, 00:17:01.211 "memory_domains": [ 00:17:01.211 { 00:17:01.211 "dma_device_id": "system", 00:17:01.211 "dma_device_type": 1 00:17:01.211 }, 00:17:01.211 { 00:17:01.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.211 "dma_device_type": 2 00:17:01.211 } 
00:17:01.211 ], 00:17:01.211 "driver_specific": {} 00:17:01.211 } 00:17:01.211 ] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.211 "name": "Existed_Raid", 00:17:01.211 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:01.211 "strip_size_kb": 64, 00:17:01.211 "state": "configuring", 00:17:01.211 "raid_level": "raid5f", 00:17:01.211 "superblock": true, 00:17:01.211 "num_base_bdevs": 3, 00:17:01.211 "num_base_bdevs_discovered": 2, 00:17:01.211 "num_base_bdevs_operational": 3, 00:17:01.211 "base_bdevs_list": [ 00:17:01.211 { 00:17:01.211 "name": "BaseBdev1", 00:17:01.211 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:01.211 "is_configured": true, 00:17:01.211 "data_offset": 2048, 00:17:01.211 "data_size": 63488 00:17:01.211 }, 00:17:01.211 { 00:17:01.211 "name": "BaseBdev2", 00:17:01.211 "uuid": "83fcc1b8-dd56-4825-ba42-d1e3748cb60e", 00:17:01.211 "is_configured": true, 00:17:01.211 "data_offset": 2048, 00:17:01.211 "data_size": 63488 00:17:01.211 }, 00:17:01.211 { 00:17:01.211 "name": "BaseBdev3", 00:17:01.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.211 "is_configured": false, 00:17:01.211 "data_offset": 0, 00:17:01.211 "data_size": 0 00:17:01.211 } 00:17:01.211 ] 00:17:01.211 }' 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.211 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.470 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.470 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:01.470 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.470 [2024-12-05 20:10:02.878875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.470 [2024-12-05 20:10:02.879154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:01.470 [2024-12-05 20:10:02.879173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:01.471 [2024-12-05 20:10:02.879453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:01.471 BaseBdev3 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.471 [2024-12-05 20:10:02.885112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:01.471 [2024-12-05 20:10:02.885135] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:01.471 [2024-12-05 20:10:02.885314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.471 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.729 [ 00:17:01.729 { 00:17:01.729 "name": "BaseBdev3", 00:17:01.729 "aliases": [ 00:17:01.729 "8df15bdb-0e9b-4bb3-9fb3-d4c8ce537ed9" 00:17:01.729 ], 00:17:01.729 "product_name": "Malloc disk", 00:17:01.729 "block_size": 512, 00:17:01.729 "num_blocks": 65536, 00:17:01.729 "uuid": "8df15bdb-0e9b-4bb3-9fb3-d4c8ce537ed9", 00:17:01.729 "assigned_rate_limits": { 00:17:01.729 "rw_ios_per_sec": 0, 00:17:01.729 "rw_mbytes_per_sec": 0, 00:17:01.729 "r_mbytes_per_sec": 0, 00:17:01.729 "w_mbytes_per_sec": 0 00:17:01.729 }, 00:17:01.729 "claimed": true, 00:17:01.729 "claim_type": "exclusive_write", 00:17:01.729 "zoned": false, 00:17:01.729 "supported_io_types": { 00:17:01.729 "read": true, 00:17:01.729 "write": true, 00:17:01.729 "unmap": true, 00:17:01.729 "flush": true, 00:17:01.729 "reset": true, 00:17:01.729 "nvme_admin": false, 00:17:01.729 "nvme_io": false, 00:17:01.729 "nvme_io_md": false, 00:17:01.729 "write_zeroes": true, 00:17:01.729 "zcopy": true, 00:17:01.729 "get_zone_info": false, 00:17:01.729 "zone_management": false, 00:17:01.729 "zone_append": false, 00:17:01.729 "compare": false, 00:17:01.729 "compare_and_write": false, 00:17:01.729 "abort": true, 00:17:01.729 "seek_hole": false, 00:17:01.729 "seek_data": false, 00:17:01.729 "copy": true, 00:17:01.729 
"nvme_iov_md": false 00:17:01.729 }, 00:17:01.729 "memory_domains": [ 00:17:01.729 { 00:17:01.729 "dma_device_id": "system", 00:17:01.729 "dma_device_type": 1 00:17:01.729 }, 00:17:01.729 { 00:17:01.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.729 "dma_device_type": 2 00:17:01.729 } 00:17:01.729 ], 00:17:01.729 "driver_specific": {} 00:17:01.729 } 00:17:01.729 ] 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:01.729 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.730 "name": "Existed_Raid", 00:17:01.730 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:01.730 "strip_size_kb": 64, 00:17:01.730 "state": "online", 00:17:01.730 "raid_level": "raid5f", 00:17:01.730 "superblock": true, 00:17:01.730 "num_base_bdevs": 3, 00:17:01.730 "num_base_bdevs_discovered": 3, 00:17:01.730 "num_base_bdevs_operational": 3, 00:17:01.730 "base_bdevs_list": [ 00:17:01.730 { 00:17:01.730 "name": "BaseBdev1", 00:17:01.730 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:01.730 "is_configured": true, 00:17:01.730 "data_offset": 2048, 00:17:01.730 "data_size": 63488 00:17:01.730 }, 00:17:01.730 { 00:17:01.730 "name": "BaseBdev2", 00:17:01.730 "uuid": "83fcc1b8-dd56-4825-ba42-d1e3748cb60e", 00:17:01.730 "is_configured": true, 00:17:01.730 "data_offset": 2048, 00:17:01.730 "data_size": 63488 00:17:01.730 }, 00:17:01.730 { 00:17:01.730 "name": "BaseBdev3", 00:17:01.730 "uuid": "8df15bdb-0e9b-4bb3-9fb3-d4c8ce537ed9", 00:17:01.730 "is_configured": true, 00:17:01.730 "data_offset": 2048, 00:17:01.730 "data_size": 63488 00:17:01.730 } 00:17:01.730 ] 00:17:01.730 }' 00:17:01.730 20:10:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.730 20:10:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:01.988 [2024-12-05 20:10:03.366783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:01.988 "name": "Existed_Raid", 00:17:01.988 "aliases": [ 00:17:01.988 "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e" 00:17:01.988 ], 00:17:01.988 "product_name": "Raid Volume", 00:17:01.988 "block_size": 512, 00:17:01.988 "num_blocks": 126976, 00:17:01.988 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:01.988 "assigned_rate_limits": { 00:17:01.988 "rw_ios_per_sec": 0, 00:17:01.988 
"rw_mbytes_per_sec": 0, 00:17:01.988 "r_mbytes_per_sec": 0, 00:17:01.988 "w_mbytes_per_sec": 0 00:17:01.988 }, 00:17:01.988 "claimed": false, 00:17:01.988 "zoned": false, 00:17:01.988 "supported_io_types": { 00:17:01.988 "read": true, 00:17:01.988 "write": true, 00:17:01.988 "unmap": false, 00:17:01.988 "flush": false, 00:17:01.988 "reset": true, 00:17:01.988 "nvme_admin": false, 00:17:01.988 "nvme_io": false, 00:17:01.988 "nvme_io_md": false, 00:17:01.988 "write_zeroes": true, 00:17:01.988 "zcopy": false, 00:17:01.988 "get_zone_info": false, 00:17:01.988 "zone_management": false, 00:17:01.988 "zone_append": false, 00:17:01.988 "compare": false, 00:17:01.988 "compare_and_write": false, 00:17:01.988 "abort": false, 00:17:01.988 "seek_hole": false, 00:17:01.988 "seek_data": false, 00:17:01.988 "copy": false, 00:17:01.988 "nvme_iov_md": false 00:17:01.988 }, 00:17:01.988 "driver_specific": { 00:17:01.988 "raid": { 00:17:01.988 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:01.988 "strip_size_kb": 64, 00:17:01.988 "state": "online", 00:17:01.988 "raid_level": "raid5f", 00:17:01.988 "superblock": true, 00:17:01.988 "num_base_bdevs": 3, 00:17:01.988 "num_base_bdevs_discovered": 3, 00:17:01.988 "num_base_bdevs_operational": 3, 00:17:01.988 "base_bdevs_list": [ 00:17:01.988 { 00:17:01.988 "name": "BaseBdev1", 00:17:01.988 "uuid": "22c40db7-b729-4a8d-8ded-3e4293d8920a", 00:17:01.988 "is_configured": true, 00:17:01.988 "data_offset": 2048, 00:17:01.988 "data_size": 63488 00:17:01.988 }, 00:17:01.988 { 00:17:01.988 "name": "BaseBdev2", 00:17:01.988 "uuid": "83fcc1b8-dd56-4825-ba42-d1e3748cb60e", 00:17:01.988 "is_configured": true, 00:17:01.988 "data_offset": 2048, 00:17:01.988 "data_size": 63488 00:17:01.988 }, 00:17:01.988 { 00:17:01.988 "name": "BaseBdev3", 00:17:01.988 "uuid": "8df15bdb-0e9b-4bb3-9fb3-d4c8ce537ed9", 00:17:01.988 "is_configured": true, 00:17:01.988 "data_offset": 2048, 00:17:01.988 "data_size": 63488 00:17:01.988 } 00:17:01.988 ] 00:17:01.988 } 
00:17:01.988 } 00:17:01.988 }' 00:17:01.988 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:02.247 BaseBdev2 00:17:02.247 BaseBdev3' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 [2024-12-05 20:10:03.658092] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.506 "name": "Existed_Raid", 00:17:02.506 "uuid": "ef4a4601-3d79-42b5-a241-cfc7f6af3f5e", 00:17:02.506 "strip_size_kb": 64, 00:17:02.506 "state": "online", 00:17:02.506 "raid_level": "raid5f", 00:17:02.506 "superblock": true, 00:17:02.506 "num_base_bdevs": 3, 00:17:02.506 "num_base_bdevs_discovered": 2, 00:17:02.506 "num_base_bdevs_operational": 2, 00:17:02.506 "base_bdevs_list": [ 00:17:02.506 { 00:17:02.506 "name": null, 00:17:02.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.506 "is_configured": false, 00:17:02.506 "data_offset": 0, 00:17:02.506 "data_size": 63488 00:17:02.506 }, 00:17:02.506 { 00:17:02.506 "name": "BaseBdev2", 00:17:02.506 "uuid": "83fcc1b8-dd56-4825-ba42-d1e3748cb60e", 00:17:02.506 "is_configured": true, 00:17:02.506 "data_offset": 2048, 00:17:02.506 "data_size": 63488 00:17:02.506 }, 00:17:02.506 { 00:17:02.506 "name": "BaseBdev3", 00:17:02.506 "uuid": "8df15bdb-0e9b-4bb3-9fb3-d4c8ce537ed9", 00:17:02.506 "is_configured": true, 00:17:02.506 "data_offset": 2048, 00:17:02.506 "data_size": 63488 00:17:02.506 } 00:17:02.506 ] 00:17:02.506 }' 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.506 20:10:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.765 20:10:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:02.765 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 [2024-12-05 20:10:04.257730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.025 [2024-12-05 20:10:04.257879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.025 [2024-12-05 20:10:04.348776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 [2024-12-05 20:10:04.404785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:03.025 [2024-12-05 20:10:04.404842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 BaseBdev2 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 [ 00:17:03.286 { 00:17:03.286 "name": "BaseBdev2", 00:17:03.286 "aliases": [ 00:17:03.286 "572c5266-5b95-453c-8a36-3b694476d2fe" 00:17:03.286 ], 00:17:03.286 "product_name": "Malloc disk", 00:17:03.286 "block_size": 512, 00:17:03.286 "num_blocks": 65536, 00:17:03.286 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:03.286 "assigned_rate_limits": { 00:17:03.286 "rw_ios_per_sec": 0, 00:17:03.286 "rw_mbytes_per_sec": 0, 00:17:03.286 "r_mbytes_per_sec": 0, 00:17:03.286 "w_mbytes_per_sec": 0 00:17:03.286 }, 00:17:03.286 "claimed": false, 00:17:03.286 "zoned": false, 00:17:03.286 "supported_io_types": { 00:17:03.286 "read": true, 00:17:03.286 "write": true, 00:17:03.286 "unmap": true, 00:17:03.286 "flush": true, 00:17:03.286 "reset": true, 00:17:03.286 "nvme_admin": false, 00:17:03.286 "nvme_io": false, 00:17:03.286 "nvme_io_md": false, 00:17:03.286 "write_zeroes": true, 00:17:03.286 "zcopy": true, 00:17:03.286 "get_zone_info": false, 00:17:03.286 "zone_management": false, 00:17:03.286 "zone_append": false, 
00:17:03.286 "compare": false, 00:17:03.286 "compare_and_write": false, 00:17:03.286 "abort": true, 00:17:03.286 "seek_hole": false, 00:17:03.286 "seek_data": false, 00:17:03.286 "copy": true, 00:17:03.286 "nvme_iov_md": false 00:17:03.286 }, 00:17:03.286 "memory_domains": [ 00:17:03.286 { 00:17:03.286 "dma_device_id": "system", 00:17:03.286 "dma_device_type": 1 00:17:03.286 }, 00:17:03.286 { 00:17:03.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.286 "dma_device_type": 2 00:17:03.286 } 00:17:03.286 ], 00:17:03.286 "driver_specific": {} 00:17:03.286 } 00:17:03.286 ] 00:17:03.286 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 BaseBdev3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.287 
20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 [ 00:17:03.287 { 00:17:03.287 "name": "BaseBdev3", 00:17:03.287 "aliases": [ 00:17:03.287 "452f8bd3-11c5-4021-b372-2026589d2538" 00:17:03.287 ], 00:17:03.287 "product_name": "Malloc disk", 00:17:03.287 "block_size": 512, 00:17:03.287 "num_blocks": 65536, 00:17:03.287 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:03.287 "assigned_rate_limits": { 00:17:03.287 "rw_ios_per_sec": 0, 00:17:03.287 "rw_mbytes_per_sec": 0, 00:17:03.287 "r_mbytes_per_sec": 0, 00:17:03.287 "w_mbytes_per_sec": 0 00:17:03.287 }, 00:17:03.287 "claimed": false, 00:17:03.287 "zoned": false, 00:17:03.287 "supported_io_types": { 00:17:03.287 "read": true, 00:17:03.287 "write": true, 00:17:03.287 "unmap": true, 00:17:03.287 "flush": true, 00:17:03.287 "reset": true, 00:17:03.287 "nvme_admin": false, 00:17:03.287 "nvme_io": false, 00:17:03.287 "nvme_io_md": false, 00:17:03.287 "write_zeroes": true, 00:17:03.287 "zcopy": true, 00:17:03.287 "get_zone_info": 
false, 00:17:03.287 "zone_management": false, 00:17:03.287 "zone_append": false, 00:17:03.287 "compare": false, 00:17:03.287 "compare_and_write": false, 00:17:03.287 "abort": true, 00:17:03.287 "seek_hole": false, 00:17:03.287 "seek_data": false, 00:17:03.287 "copy": true, 00:17:03.287 "nvme_iov_md": false 00:17:03.287 }, 00:17:03.287 "memory_domains": [ 00:17:03.287 { 00:17:03.287 "dma_device_id": "system", 00:17:03.287 "dma_device_type": 1 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.287 "dma_device_type": 2 00:17:03.287 } 00:17:03.287 ], 00:17:03.287 "driver_specific": {} 00:17:03.287 } 00:17:03.287 ] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 [2024-12-05 20:10:04.696806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.287 [2024-12-05 20:10:04.696845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.287 [2024-12-05 20:10:04.696866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.287 [2024-12-05 20:10:04.698659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.287 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.547 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.547 20:10:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.547 "name": "Existed_Raid", 00:17:03.547 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:03.547 "strip_size_kb": 64, 00:17:03.547 "state": "configuring", 00:17:03.547 "raid_level": "raid5f", 00:17:03.547 "superblock": true, 00:17:03.547 "num_base_bdevs": 3, 00:17:03.547 "num_base_bdevs_discovered": 2, 00:17:03.547 "num_base_bdevs_operational": 3, 00:17:03.547 "base_bdevs_list": [ 00:17:03.547 { 00:17:03.547 "name": "BaseBdev1", 00:17:03.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.547 "is_configured": false, 00:17:03.547 "data_offset": 0, 00:17:03.547 "data_size": 0 00:17:03.547 }, 00:17:03.547 { 00:17:03.547 "name": "BaseBdev2", 00:17:03.547 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:03.547 "is_configured": true, 00:17:03.547 "data_offset": 2048, 00:17:03.547 "data_size": 63488 00:17:03.547 }, 00:17:03.547 { 00:17:03.547 "name": "BaseBdev3", 00:17:03.547 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:03.547 "is_configured": true, 00:17:03.547 "data_offset": 2048, 00:17:03.547 "data_size": 63488 00:17:03.547 } 00:17:03.547 ] 00:17:03.547 }' 00:17:03.547 20:10:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.547 20:10:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.806 [2024-12-05 20:10:05.140066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.806 
20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.806 "name": "Existed_Raid", 00:17:03.806 "uuid": 
"4242108d-6c5d-485d-bb57-9076c4408646", 00:17:03.806 "strip_size_kb": 64, 00:17:03.806 "state": "configuring", 00:17:03.806 "raid_level": "raid5f", 00:17:03.806 "superblock": true, 00:17:03.806 "num_base_bdevs": 3, 00:17:03.806 "num_base_bdevs_discovered": 1, 00:17:03.806 "num_base_bdevs_operational": 3, 00:17:03.806 "base_bdevs_list": [ 00:17:03.806 { 00:17:03.806 "name": "BaseBdev1", 00:17:03.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.806 "is_configured": false, 00:17:03.806 "data_offset": 0, 00:17:03.806 "data_size": 0 00:17:03.806 }, 00:17:03.806 { 00:17:03.806 "name": null, 00:17:03.806 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:03.806 "is_configured": false, 00:17:03.806 "data_offset": 0, 00:17:03.806 "data_size": 63488 00:17:03.806 }, 00:17:03.806 { 00:17:03.806 "name": "BaseBdev3", 00:17:03.806 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:03.806 "is_configured": true, 00:17:03.806 "data_offset": 2048, 00:17:03.806 "data_size": 63488 00:17:03.806 } 00:17:03.806 ] 00:17:03.806 }' 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.806 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:04.375 20:10:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 [2024-12-05 20:10:05.618239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.375 BaseBdev1 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:04.375 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 [ 00:17:04.375 { 00:17:04.375 "name": "BaseBdev1", 00:17:04.376 "aliases": [ 00:17:04.376 "8714d1fb-9893-4c55-8639-45d7f922040c" 00:17:04.376 ], 00:17:04.376 "product_name": "Malloc disk", 00:17:04.376 "block_size": 512, 00:17:04.376 "num_blocks": 65536, 00:17:04.376 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:04.376 "assigned_rate_limits": { 00:17:04.376 "rw_ios_per_sec": 0, 00:17:04.376 "rw_mbytes_per_sec": 0, 00:17:04.376 "r_mbytes_per_sec": 0, 00:17:04.376 "w_mbytes_per_sec": 0 00:17:04.376 }, 00:17:04.376 "claimed": true, 00:17:04.376 "claim_type": "exclusive_write", 00:17:04.376 "zoned": false, 00:17:04.376 "supported_io_types": { 00:17:04.376 "read": true, 00:17:04.376 "write": true, 00:17:04.376 "unmap": true, 00:17:04.376 "flush": true, 00:17:04.376 "reset": true, 00:17:04.376 "nvme_admin": false, 00:17:04.376 "nvme_io": false, 00:17:04.376 "nvme_io_md": false, 00:17:04.376 "write_zeroes": true, 00:17:04.376 "zcopy": true, 00:17:04.376 "get_zone_info": false, 00:17:04.376 "zone_management": false, 00:17:04.376 "zone_append": false, 00:17:04.376 "compare": false, 00:17:04.376 "compare_and_write": false, 00:17:04.376 "abort": true, 00:17:04.376 "seek_hole": false, 00:17:04.376 "seek_data": false, 00:17:04.376 "copy": true, 00:17:04.376 "nvme_iov_md": false 00:17:04.376 }, 00:17:04.376 "memory_domains": [ 00:17:04.376 { 00:17:04.376 "dma_device_id": "system", 00:17:04.376 "dma_device_type": 1 00:17:04.376 }, 00:17:04.376 { 00:17:04.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.376 "dma_device_type": 2 00:17:04.376 } 00:17:04.376 ], 00:17:04.376 "driver_specific": {} 00:17:04.376 } 00:17:04.376 ] 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.376 "name": "Existed_Raid", 00:17:04.376 "uuid": 
"4242108d-6c5d-485d-bb57-9076c4408646", 00:17:04.376 "strip_size_kb": 64, 00:17:04.376 "state": "configuring", 00:17:04.376 "raid_level": "raid5f", 00:17:04.376 "superblock": true, 00:17:04.376 "num_base_bdevs": 3, 00:17:04.376 "num_base_bdevs_discovered": 2, 00:17:04.376 "num_base_bdevs_operational": 3, 00:17:04.376 "base_bdevs_list": [ 00:17:04.376 { 00:17:04.376 "name": "BaseBdev1", 00:17:04.376 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:04.376 "is_configured": true, 00:17:04.376 "data_offset": 2048, 00:17:04.376 "data_size": 63488 00:17:04.376 }, 00:17:04.376 { 00:17:04.376 "name": null, 00:17:04.376 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:04.376 "is_configured": false, 00:17:04.376 "data_offset": 0, 00:17:04.376 "data_size": 63488 00:17:04.376 }, 00:17:04.376 { 00:17:04.376 "name": "BaseBdev3", 00:17:04.376 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:04.376 "is_configured": true, 00:17:04.376 "data_offset": 2048, 00:17:04.376 "data_size": 63488 00:17:04.376 } 00:17:04.376 ] 00:17:04.376 }' 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.376 20:10:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:04.946 20:10:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.946 [2024-12-05 20:10:06.153353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.946 "name": "Existed_Raid", 00:17:04.946 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:04.946 "strip_size_kb": 64, 00:17:04.946 "state": "configuring", 00:17:04.946 "raid_level": "raid5f", 00:17:04.946 "superblock": true, 00:17:04.946 "num_base_bdevs": 3, 00:17:04.946 "num_base_bdevs_discovered": 1, 00:17:04.946 "num_base_bdevs_operational": 3, 00:17:04.946 "base_bdevs_list": [ 00:17:04.946 { 00:17:04.946 "name": "BaseBdev1", 00:17:04.946 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:04.946 "is_configured": true, 00:17:04.946 "data_offset": 2048, 00:17:04.946 "data_size": 63488 00:17:04.946 }, 00:17:04.946 { 00:17:04.946 "name": null, 00:17:04.946 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:04.946 "is_configured": false, 00:17:04.946 "data_offset": 0, 00:17:04.946 "data_size": 63488 00:17:04.946 }, 00:17:04.946 { 00:17:04.946 "name": null, 00:17:04.946 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:04.946 "is_configured": false, 00:17:04.946 "data_offset": 0, 00:17:04.946 "data_size": 63488 00:17:04.946 } 00:17:04.946 ] 00:17:04.946 }' 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.946 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 [2024-12-05 20:10:06.632549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.206 20:10:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.206 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.466 "name": "Existed_Raid", 00:17:05.466 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:05.466 "strip_size_kb": 64, 00:17:05.466 "state": "configuring", 00:17:05.466 "raid_level": "raid5f", 00:17:05.466 "superblock": true, 00:17:05.466 "num_base_bdevs": 3, 00:17:05.466 "num_base_bdevs_discovered": 2, 00:17:05.466 "num_base_bdevs_operational": 3, 00:17:05.466 "base_bdevs_list": [ 00:17:05.466 { 00:17:05.466 "name": "BaseBdev1", 00:17:05.466 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:05.466 "is_configured": true, 00:17:05.466 "data_offset": 2048, 00:17:05.466 "data_size": 63488 00:17:05.466 }, 00:17:05.466 { 00:17:05.466 "name": null, 00:17:05.466 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:05.466 "is_configured": false, 00:17:05.466 "data_offset": 0, 00:17:05.466 "data_size": 63488 00:17:05.466 }, 00:17:05.466 { 00:17:05.466 "name": "BaseBdev3", 00:17:05.466 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:05.466 
"is_configured": true, 00:17:05.466 "data_offset": 2048, 00:17:05.466 "data_size": 63488 00:17:05.466 } 00:17:05.466 ] 00:17:05.466 }' 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.466 20:10:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.726 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.985 [2024-12-05 20:10:07.163670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.985 "name": "Existed_Raid", 00:17:05.985 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:05.985 "strip_size_kb": 64, 00:17:05.985 "state": "configuring", 00:17:05.985 "raid_level": "raid5f", 00:17:05.985 "superblock": true, 00:17:05.985 "num_base_bdevs": 3, 00:17:05.985 "num_base_bdevs_discovered": 1, 00:17:05.985 "num_base_bdevs_operational": 3, 00:17:05.985 "base_bdevs_list": [ 00:17:05.985 { 00:17:05.985 "name": null, 00:17:05.985 
"uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:05.985 "is_configured": false, 00:17:05.985 "data_offset": 0, 00:17:05.985 "data_size": 63488 00:17:05.985 }, 00:17:05.985 { 00:17:05.985 "name": null, 00:17:05.985 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:05.985 "is_configured": false, 00:17:05.985 "data_offset": 0, 00:17:05.985 "data_size": 63488 00:17:05.985 }, 00:17:05.985 { 00:17:05.985 "name": "BaseBdev3", 00:17:05.985 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:05.985 "is_configured": true, 00:17:05.985 "data_offset": 2048, 00:17:05.985 "data_size": 63488 00:17:05.985 } 00:17:05.985 ] 00:17:05.985 }' 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.985 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.243 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 [2024-12-05 20:10:07.678154] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:06.502 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.502 "name": "Existed_Raid", 00:17:06.502 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:06.502 "strip_size_kb": 64, 00:17:06.502 "state": "configuring", 00:17:06.502 "raid_level": "raid5f", 00:17:06.503 "superblock": true, 00:17:06.503 "num_base_bdevs": 3, 00:17:06.503 "num_base_bdevs_discovered": 2, 00:17:06.503 "num_base_bdevs_operational": 3, 00:17:06.503 "base_bdevs_list": [ 00:17:06.503 { 00:17:06.503 "name": null, 00:17:06.503 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:06.503 "is_configured": false, 00:17:06.503 "data_offset": 0, 00:17:06.503 "data_size": 63488 00:17:06.503 }, 00:17:06.503 { 00:17:06.503 "name": "BaseBdev2", 00:17:06.503 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:06.503 "is_configured": true, 00:17:06.503 "data_offset": 2048, 00:17:06.503 "data_size": 63488 00:17:06.503 }, 00:17:06.503 { 00:17:06.503 "name": "BaseBdev3", 00:17:06.503 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:06.503 "is_configured": true, 00:17:06.503 "data_offset": 2048, 00:17:06.503 "data_size": 63488 00:17:06.503 } 00:17:06.503 ] 00:17:06.503 }' 00:17:06.503 20:10:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.503 20:10:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8714d1fb-9893-4c55-8639-45d7f922040c 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 [2024-12-05 20:10:08.188364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:06.767 [2024-12-05 20:10:08.188569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:06.767 [2024-12-05 20:10:08.188592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:06.767 [2024-12-05 20:10:08.188855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:06.767 NewBaseBdev 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.767 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 [2024-12-05 20:10:08.194303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:06.767 [2024-12-05 20:10:08.194328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:06.767 [2024-12-05 20:10:08.194497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.047 [ 00:17:07.047 { 00:17:07.047 "name": "NewBaseBdev", 00:17:07.047 "aliases": [ 00:17:07.047 "8714d1fb-9893-4c55-8639-45d7f922040c" 00:17:07.047 ], 00:17:07.047 "product_name": "Malloc disk", 00:17:07.047 "block_size": 512, 
00:17:07.047 "num_blocks": 65536, 00:17:07.047 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:07.047 "assigned_rate_limits": { 00:17:07.047 "rw_ios_per_sec": 0, 00:17:07.047 "rw_mbytes_per_sec": 0, 00:17:07.047 "r_mbytes_per_sec": 0, 00:17:07.047 "w_mbytes_per_sec": 0 00:17:07.047 }, 00:17:07.047 "claimed": true, 00:17:07.047 "claim_type": "exclusive_write", 00:17:07.047 "zoned": false, 00:17:07.047 "supported_io_types": { 00:17:07.047 "read": true, 00:17:07.047 "write": true, 00:17:07.047 "unmap": true, 00:17:07.047 "flush": true, 00:17:07.047 "reset": true, 00:17:07.047 "nvme_admin": false, 00:17:07.047 "nvme_io": false, 00:17:07.047 "nvme_io_md": false, 00:17:07.047 "write_zeroes": true, 00:17:07.047 "zcopy": true, 00:17:07.047 "get_zone_info": false, 00:17:07.047 "zone_management": false, 00:17:07.047 "zone_append": false, 00:17:07.047 "compare": false, 00:17:07.047 "compare_and_write": false, 00:17:07.047 "abort": true, 00:17:07.047 "seek_hole": false, 00:17:07.047 "seek_data": false, 00:17:07.047 "copy": true, 00:17:07.047 "nvme_iov_md": false 00:17:07.047 }, 00:17:07.047 "memory_domains": [ 00:17:07.047 { 00:17:07.047 "dma_device_id": "system", 00:17:07.047 "dma_device_type": 1 00:17:07.047 }, 00:17:07.047 { 00:17:07.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.047 "dma_device_type": 2 00:17:07.047 } 00:17:07.047 ], 00:17:07.047 "driver_specific": {} 00:17:07.047 } 00:17:07.047 ] 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.047 "name": "Existed_Raid", 00:17:07.047 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:07.047 "strip_size_kb": 64, 00:17:07.047 "state": "online", 00:17:07.047 "raid_level": "raid5f", 00:17:07.047 "superblock": true, 00:17:07.047 "num_base_bdevs": 3, 00:17:07.047 "num_base_bdevs_discovered": 3, 00:17:07.047 "num_base_bdevs_operational": 3, 00:17:07.047 "base_bdevs_list": [ 00:17:07.047 { 00:17:07.047 "name": 
"NewBaseBdev", 00:17:07.047 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:07.047 "is_configured": true, 00:17:07.047 "data_offset": 2048, 00:17:07.047 "data_size": 63488 00:17:07.047 }, 00:17:07.047 { 00:17:07.047 "name": "BaseBdev2", 00:17:07.047 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:07.047 "is_configured": true, 00:17:07.047 "data_offset": 2048, 00:17:07.047 "data_size": 63488 00:17:07.047 }, 00:17:07.047 { 00:17:07.047 "name": "BaseBdev3", 00:17:07.047 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:07.047 "is_configured": true, 00:17:07.047 "data_offset": 2048, 00:17:07.047 "data_size": 63488 00:17:07.047 } 00:17:07.047 ] 00:17:07.047 }' 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.047 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.337 20:10:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.337 [2024-12-05 20:10:08.648253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.337 "name": "Existed_Raid", 00:17:07.337 "aliases": [ 00:17:07.337 "4242108d-6c5d-485d-bb57-9076c4408646" 00:17:07.337 ], 00:17:07.337 "product_name": "Raid Volume", 00:17:07.337 "block_size": 512, 00:17:07.337 "num_blocks": 126976, 00:17:07.337 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:07.337 "assigned_rate_limits": { 00:17:07.337 "rw_ios_per_sec": 0, 00:17:07.337 "rw_mbytes_per_sec": 0, 00:17:07.337 "r_mbytes_per_sec": 0, 00:17:07.337 "w_mbytes_per_sec": 0 00:17:07.337 }, 00:17:07.337 "claimed": false, 00:17:07.337 "zoned": false, 00:17:07.337 "supported_io_types": { 00:17:07.337 "read": true, 00:17:07.337 "write": true, 00:17:07.337 "unmap": false, 00:17:07.337 "flush": false, 00:17:07.337 "reset": true, 00:17:07.337 "nvme_admin": false, 00:17:07.337 "nvme_io": false, 00:17:07.337 "nvme_io_md": false, 00:17:07.337 "write_zeroes": true, 00:17:07.337 "zcopy": false, 00:17:07.337 "get_zone_info": false, 00:17:07.337 "zone_management": false, 00:17:07.337 "zone_append": false, 00:17:07.337 "compare": false, 00:17:07.337 "compare_and_write": false, 00:17:07.337 "abort": false, 00:17:07.337 "seek_hole": false, 00:17:07.337 "seek_data": false, 00:17:07.337 "copy": false, 00:17:07.337 "nvme_iov_md": false 00:17:07.337 }, 00:17:07.337 "driver_specific": { 00:17:07.337 "raid": { 00:17:07.337 "uuid": "4242108d-6c5d-485d-bb57-9076c4408646", 00:17:07.337 "strip_size_kb": 64, 00:17:07.337 "state": "online", 00:17:07.337 "raid_level": "raid5f", 00:17:07.337 "superblock": true, 00:17:07.337 "num_base_bdevs": 3, 00:17:07.337 
"num_base_bdevs_discovered": 3, 00:17:07.337 "num_base_bdevs_operational": 3, 00:17:07.337 "base_bdevs_list": [ 00:17:07.337 { 00:17:07.337 "name": "NewBaseBdev", 00:17:07.337 "uuid": "8714d1fb-9893-4c55-8639-45d7f922040c", 00:17:07.337 "is_configured": true, 00:17:07.337 "data_offset": 2048, 00:17:07.337 "data_size": 63488 00:17:07.337 }, 00:17:07.337 { 00:17:07.337 "name": "BaseBdev2", 00:17:07.337 "uuid": "572c5266-5b95-453c-8a36-3b694476d2fe", 00:17:07.337 "is_configured": true, 00:17:07.337 "data_offset": 2048, 00:17:07.337 "data_size": 63488 00:17:07.337 }, 00:17:07.337 { 00:17:07.337 "name": "BaseBdev3", 00:17:07.337 "uuid": "452f8bd3-11c5-4021-b372-2026589d2538", 00:17:07.337 "is_configured": true, 00:17:07.337 "data_offset": 2048, 00:17:07.337 "data_size": 63488 00:17:07.337 } 00:17:07.337 ] 00:17:07.337 } 00:17:07.337 } 00:17:07.337 }' 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:07.337 BaseBdev2 00:17:07.337 BaseBdev3' 00:17:07.337 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 [2024-12-05 20:10:08.923570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.597 [2024-12-05 20:10:08.923597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.597 [2024-12-05 20:10:08.923669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.597 [2024-12-05 20:10:08.923970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.597 [2024-12-05 20:10:08.923992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80597 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80597 ']' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80597 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80597 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.597 killing process with pid 80597 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80597' 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80597 00:17:07.597 [2024-12-05 20:10:08.966804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.597 20:10:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80597 00:17:07.857 [2024-12-05 20:10:09.246030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.236 20:10:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:09.236 00:17:09.236 real 0m10.405s 00:17:09.236 user 0m16.553s 00:17:09.236 sys 0m1.965s 00:17:09.236 20:10:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.236 20:10:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.236 ************************************ 00:17:09.236 END TEST raid5f_state_function_test_sb 00:17:09.236 ************************************ 00:17:09.236 20:10:10 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:09.236 20:10:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:09.236 20:10:10 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.236 20:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.236 ************************************ 00:17:09.236 START TEST raid5f_superblock_test 00:17:09.236 ************************************ 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81219 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81219 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81219 ']' 00:17:09.236 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.237 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.237 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.237 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.237 20:10:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.237 [2024-12-05 20:10:10.494329] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:17:09.237 [2024-12-05 20:10:10.494499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81219 ] 00:17:09.496 [2024-12-05 20:10:10.675638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.496 [2024-12-05 20:10:10.778769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.755 [2024-12-05 20:10:10.955241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.755 [2024-12-05 20:10:10.955295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.017 malloc1 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.017 [2024-12-05 20:10:11.347793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.017 [2024-12-05 20:10:11.347867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.017 [2024-12-05 20:10:11.347889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:10.017 [2024-12-05 20:10:11.347907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.017 [2024-12-05 20:10:11.349943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.017 [2024-12-05 20:10:11.349972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.017 pt1 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.017 malloc2 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.017 [2024-12-05 20:10:11.402832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.017 [2024-12-05 20:10:11.402894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.017 [2024-12-05 20:10:11.402928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:10.017 [2024-12-05 20:10:11.402936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.017 [2024-12-05 20:10:11.404917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.017 [2024-12-05 20:10:11.404947] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.017 pt2 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.017 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.277 malloc3 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.277 [2024-12-05 20:10:11.493309] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.277 [2024-12-05 20:10:11.493354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.277 [2024-12-05 20:10:11.493374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:10.277 [2024-12-05 20:10:11.493383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.277 [2024-12-05 20:10:11.495390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.277 [2024-12-05 20:10:11.495421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.277 pt3 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.277 [2024-12-05 20:10:11.505341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.277 [2024-12-05 20:10:11.507066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.277 [2024-12-05 20:10:11.507155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.277 [2024-12-05 20:10:11.507317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:10.277 [2024-12-05 20:10:11.507335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:10.277 [2024-12-05 20:10:11.507567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:10.277 [2024-12-05 20:10:11.512644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:10.277 [2024-12-05 20:10:11.512665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:10.277 [2024-12-05 20:10:11.512858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.277 "name": "raid_bdev1", 00:17:10.277 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:10.277 "strip_size_kb": 64, 00:17:10.277 "state": "online", 00:17:10.277 "raid_level": "raid5f", 00:17:10.277 "superblock": true, 00:17:10.277 "num_base_bdevs": 3, 00:17:10.277 "num_base_bdevs_discovered": 3, 00:17:10.277 "num_base_bdevs_operational": 3, 00:17:10.277 "base_bdevs_list": [ 00:17:10.277 { 00:17:10.277 "name": "pt1", 00:17:10.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.277 "is_configured": true, 00:17:10.277 "data_offset": 2048, 00:17:10.277 "data_size": 63488 00:17:10.277 }, 00:17:10.277 { 00:17:10.277 "name": "pt2", 00:17:10.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.277 "is_configured": true, 00:17:10.277 "data_offset": 2048, 00:17:10.277 "data_size": 63488 00:17:10.277 }, 00:17:10.277 { 00:17:10.277 "name": "pt3", 00:17:10.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.277 "is_configured": true, 00:17:10.277 "data_offset": 2048, 00:17:10.277 "data_size": 63488 00:17:10.277 } 00:17:10.277 ] 00:17:10.277 }' 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.277 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:10.536 20:10:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.536 [2024-12-05 20:10:11.946598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.536 20:10:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.795 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.795 "name": "raid_bdev1", 00:17:10.795 "aliases": [ 00:17:10.795 "7e1d0255-54e7-4450-9f04-36ffd8dc76d7" 00:17:10.795 ], 00:17:10.795 "product_name": "Raid Volume", 00:17:10.795 "block_size": 512, 00:17:10.795 "num_blocks": 126976, 00:17:10.796 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:10.796 "assigned_rate_limits": { 00:17:10.796 "rw_ios_per_sec": 0, 00:17:10.796 "rw_mbytes_per_sec": 0, 00:17:10.796 "r_mbytes_per_sec": 0, 00:17:10.796 "w_mbytes_per_sec": 0 00:17:10.796 }, 00:17:10.796 "claimed": false, 00:17:10.796 "zoned": false, 00:17:10.796 "supported_io_types": { 00:17:10.796 "read": true, 00:17:10.796 "write": true, 00:17:10.796 "unmap": false, 00:17:10.796 "flush": false, 00:17:10.796 "reset": true, 00:17:10.796 "nvme_admin": false, 00:17:10.796 "nvme_io": false, 00:17:10.796 "nvme_io_md": false, 
00:17:10.796 "write_zeroes": true, 00:17:10.796 "zcopy": false, 00:17:10.796 "get_zone_info": false, 00:17:10.796 "zone_management": false, 00:17:10.796 "zone_append": false, 00:17:10.796 "compare": false, 00:17:10.796 "compare_and_write": false, 00:17:10.796 "abort": false, 00:17:10.796 "seek_hole": false, 00:17:10.796 "seek_data": false, 00:17:10.796 "copy": false, 00:17:10.796 "nvme_iov_md": false 00:17:10.796 }, 00:17:10.796 "driver_specific": { 00:17:10.796 "raid": { 00:17:10.796 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:10.796 "strip_size_kb": 64, 00:17:10.796 "state": "online", 00:17:10.796 "raid_level": "raid5f", 00:17:10.796 "superblock": true, 00:17:10.796 "num_base_bdevs": 3, 00:17:10.796 "num_base_bdevs_discovered": 3, 00:17:10.796 "num_base_bdevs_operational": 3, 00:17:10.796 "base_bdevs_list": [ 00:17:10.796 { 00:17:10.796 "name": "pt1", 00:17:10.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.796 "is_configured": true, 00:17:10.796 "data_offset": 2048, 00:17:10.796 "data_size": 63488 00:17:10.796 }, 00:17:10.796 { 00:17:10.796 "name": "pt2", 00:17:10.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.796 "is_configured": true, 00:17:10.796 "data_offset": 2048, 00:17:10.796 "data_size": 63488 00:17:10.796 }, 00:17:10.796 { 00:17:10.796 "name": "pt3", 00:17:10.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.796 "is_configured": true, 00:17:10.796 "data_offset": 2048, 00:17:10.796 "data_size": 63488 00:17:10.796 } 00:17:10.796 ] 00:17:10.796 } 00:17:10.796 } 00:17:10.796 }' 00:17:10.796 20:10:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:10.796 pt2 00:17:10.796 pt3' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.796 
20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:10.796 [2024-12-05 20:10:12.206108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.796 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7e1d0255-54e7-4450-9f04-36ffd8dc76d7 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7e1d0255-54e7-4450-9f04-36ffd8dc76d7 ']' 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:11.056 20:10:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 [2024-12-05 20:10:12.253874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.056 [2024-12-05 20:10:12.253915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.056 [2024-12-05 20:10:12.253992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.056 [2024-12-05 20:10:12.254069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.056 [2024-12-05 20:10:12.254078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.056 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.056 [2024-12-05 20:10:12.401667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:11.056 [2024-12-05 20:10:12.403554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:11.056 [2024-12-05 20:10:12.403611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:11.056 [2024-12-05 20:10:12.403662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:11.056 [2024-12-05 20:10:12.403705] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:11.056 [2024-12-05 20:10:12.403724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:11.056 [2024-12-05 20:10:12.403740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.057 [2024-12-05 20:10:12.403750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:11.057 request: 00:17:11.057 { 00:17:11.057 "name": "raid_bdev1", 00:17:11.057 "raid_level": "raid5f", 00:17:11.057 "base_bdevs": [ 00:17:11.057 "malloc1", 00:17:11.057 "malloc2", 00:17:11.057 "malloc3" 00:17:11.057 ], 00:17:11.057 "strip_size_kb": 64, 00:17:11.057 "superblock": false, 00:17:11.057 "method": "bdev_raid_create", 00:17:11.057 "req_id": 1 00:17:11.057 } 00:17:11.057 Got JSON-RPC error response 00:17:11.057 response: 00:17:11.057 { 00:17:11.057 "code": -17, 00:17:11.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:11.057 } 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.057 [2024-12-05 20:10:12.469492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.057 [2024-12-05 20:10:12.469534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.057 [2024-12-05 20:10:12.469550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:11.057 [2024-12-05 20:10:12.469559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.057 [2024-12-05 20:10:12.471600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.057 [2024-12-05 20:10:12.471632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.057 [2024-12-05 20:10:12.471696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:11.057 [2024-12-05 20:10:12.471780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.057 pt1 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.057 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.317 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.317 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.317 "name": "raid_bdev1", 00:17:11.317 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:11.317 "strip_size_kb": 64, 00:17:11.317 "state": "configuring", 00:17:11.317 "raid_level": "raid5f", 00:17:11.317 "superblock": true, 00:17:11.317 "num_base_bdevs": 3, 00:17:11.317 "num_base_bdevs_discovered": 1, 00:17:11.317 
"num_base_bdevs_operational": 3, 00:17:11.317 "base_bdevs_list": [ 00:17:11.317 { 00:17:11.317 "name": "pt1", 00:17:11.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.317 "is_configured": true, 00:17:11.317 "data_offset": 2048, 00:17:11.317 "data_size": 63488 00:17:11.317 }, 00:17:11.317 { 00:17:11.317 "name": null, 00:17:11.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.317 "is_configured": false, 00:17:11.317 "data_offset": 2048, 00:17:11.317 "data_size": 63488 00:17:11.317 }, 00:17:11.317 { 00:17:11.317 "name": null, 00:17:11.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.317 "is_configured": false, 00:17:11.317 "data_offset": 2048, 00:17:11.317 "data_size": 63488 00:17:11.317 } 00:17:11.317 ] 00:17:11.317 }' 00:17:11.317 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.317 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 [2024-12-05 20:10:12.940756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.576 [2024-12-05 20:10:12.940817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.576 [2024-12-05 20:10:12.940841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:11.576 [2024-12-05 20:10:12.940849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.576 [2024-12-05 20:10:12.941307] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.576 [2024-12-05 20:10:12.941343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.576 [2024-12-05 20:10:12.941436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:11.576 [2024-12-05 20:10:12.941472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.576 pt2 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 [2024-12-05 20:10:12.952762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:11.576 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.577 "name": "raid_bdev1", 00:17:11.577 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:11.577 "strip_size_kb": 64, 00:17:11.577 "state": "configuring", 00:17:11.577 "raid_level": "raid5f", 00:17:11.577 "superblock": true, 00:17:11.577 "num_base_bdevs": 3, 00:17:11.577 "num_base_bdevs_discovered": 1, 00:17:11.577 "num_base_bdevs_operational": 3, 00:17:11.577 "base_bdevs_list": [ 00:17:11.577 { 00:17:11.577 "name": "pt1", 00:17:11.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.577 "is_configured": true, 00:17:11.577 "data_offset": 2048, 00:17:11.577 "data_size": 63488 00:17:11.577 }, 00:17:11.577 { 00:17:11.577 "name": null, 00:17:11.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.577 "is_configured": false, 00:17:11.577 "data_offset": 0, 00:17:11.577 "data_size": 63488 00:17:11.577 }, 00:17:11.577 { 00:17:11.577 "name": null, 00:17:11.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.577 "is_configured": false, 00:17:11.577 "data_offset": 2048, 00:17:11.577 "data_size": 63488 00:17:11.577 } 00:17:11.577 ] 00:17:11.577 }' 00:17:11.577 20:10:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.577 20:10:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.145 [2024-12-05 20:10:13.380012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.145 [2024-12-05 20:10:13.380070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.145 [2024-12-05 20:10:13.380086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:12.145 [2024-12-05 20:10:13.380097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.145 [2024-12-05 20:10:13.380525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.145 [2024-12-05 20:10:13.380560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.145 [2024-12-05 20:10:13.380660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:12.145 [2024-12-05 20:10:13.380688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.145 pt2 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:12.145 20:10:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.145 [2024-12-05 20:10:13.391989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:12.145 [2024-12-05 20:10:13.392035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.145 [2024-12-05 20:10:13.392048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:12.145 [2024-12-05 20:10:13.392057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.145 [2024-12-05 20:10:13.392424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.145 [2024-12-05 20:10:13.392445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:12.145 [2024-12-05 20:10:13.392502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:12.145 [2024-12-05 20:10:13.392522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:12.145 [2024-12-05 20:10:13.392662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:12.145 [2024-12-05 20:10:13.392676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:12.145 [2024-12-05 20:10:13.392922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:12.145 [2024-12-05 20:10:13.398275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:12.145 [2024-12-05 20:10:13.398298] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:12.145 [2024-12-05 20:10:13.398466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.145 pt3 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.145 "name": "raid_bdev1", 00:17:12.145 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:12.145 "strip_size_kb": 64, 00:17:12.145 "state": "online", 00:17:12.145 "raid_level": "raid5f", 00:17:12.145 "superblock": true, 00:17:12.145 "num_base_bdevs": 3, 00:17:12.145 "num_base_bdevs_discovered": 3, 00:17:12.145 "num_base_bdevs_operational": 3, 00:17:12.145 "base_bdevs_list": [ 00:17:12.145 { 00:17:12.145 "name": "pt1", 00:17:12.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.145 "is_configured": true, 00:17:12.145 "data_offset": 2048, 00:17:12.145 "data_size": 63488 00:17:12.145 }, 00:17:12.145 { 00:17:12.145 "name": "pt2", 00:17:12.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.145 "is_configured": true, 00:17:12.145 "data_offset": 2048, 00:17:12.145 "data_size": 63488 00:17:12.145 }, 00:17:12.145 { 00:17:12.145 "name": "pt3", 00:17:12.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.145 "is_configured": true, 00:17:12.145 "data_offset": 2048, 00:17:12.145 "data_size": 63488 00:17:12.145 } 00:17:12.145 ] 00:17:12.145 }' 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.145 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:12.714 
20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 [2024-12-05 20:10:13.868290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:12.714 "name": "raid_bdev1", 00:17:12.714 "aliases": [ 00:17:12.714 "7e1d0255-54e7-4450-9f04-36ffd8dc76d7" 00:17:12.714 ], 00:17:12.714 "product_name": "Raid Volume", 00:17:12.714 "block_size": 512, 00:17:12.714 "num_blocks": 126976, 00:17:12.714 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:12.714 "assigned_rate_limits": { 00:17:12.714 "rw_ios_per_sec": 0, 00:17:12.714 "rw_mbytes_per_sec": 0, 00:17:12.714 "r_mbytes_per_sec": 0, 00:17:12.714 "w_mbytes_per_sec": 0 00:17:12.714 }, 00:17:12.714 "claimed": false, 00:17:12.714 "zoned": false, 00:17:12.714 "supported_io_types": { 00:17:12.714 "read": true, 00:17:12.714 "write": true, 00:17:12.714 "unmap": false, 00:17:12.714 "flush": false, 00:17:12.714 "reset": true, 00:17:12.714 "nvme_admin": false, 00:17:12.714 "nvme_io": false, 00:17:12.714 "nvme_io_md": false, 00:17:12.714 "write_zeroes": true, 00:17:12.714 "zcopy": false, 00:17:12.714 "get_zone_info": false, 
00:17:12.714 "zone_management": false, 00:17:12.714 "zone_append": false, 00:17:12.714 "compare": false, 00:17:12.714 "compare_and_write": false, 00:17:12.714 "abort": false, 00:17:12.714 "seek_hole": false, 00:17:12.714 "seek_data": false, 00:17:12.714 "copy": false, 00:17:12.714 "nvme_iov_md": false 00:17:12.714 }, 00:17:12.714 "driver_specific": { 00:17:12.714 "raid": { 00:17:12.714 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:12.714 "strip_size_kb": 64, 00:17:12.714 "state": "online", 00:17:12.714 "raid_level": "raid5f", 00:17:12.714 "superblock": true, 00:17:12.714 "num_base_bdevs": 3, 00:17:12.714 "num_base_bdevs_discovered": 3, 00:17:12.714 "num_base_bdevs_operational": 3, 00:17:12.714 "base_bdevs_list": [ 00:17:12.714 { 00:17:12.714 "name": "pt1", 00:17:12.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.714 "is_configured": true, 00:17:12.714 "data_offset": 2048, 00:17:12.714 "data_size": 63488 00:17:12.714 }, 00:17:12.714 { 00:17:12.714 "name": "pt2", 00:17:12.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.714 "is_configured": true, 00:17:12.714 "data_offset": 2048, 00:17:12.714 "data_size": 63488 00:17:12.714 }, 00:17:12.714 { 00:17:12.714 "name": "pt3", 00:17:12.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.714 "is_configured": true, 00:17:12.714 "data_offset": 2048, 00:17:12.714 "data_size": 63488 00:17:12.714 } 00:17:12.714 ] 00:17:12.714 } 00:17:12.714 } 00:17:12.714 }' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:12.714 pt2 00:17:12.714 pt3' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.714 20:10:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.714 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.715 [2024-12-05 20:10:14.127775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.715 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7e1d0255-54e7-4450-9f04-36ffd8dc76d7 '!=' 7e1d0255-54e7-4450-9f04-36ffd8dc76d7 ']' 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:12.974 20:10:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 [2024-12-05 20:10:14.175568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.974 "name": "raid_bdev1", 00:17:12.974 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:12.974 "strip_size_kb": 64, 00:17:12.974 "state": "online", 00:17:12.974 "raid_level": "raid5f", 00:17:12.974 "superblock": true, 00:17:12.974 "num_base_bdevs": 3, 00:17:12.974 "num_base_bdevs_discovered": 2, 00:17:12.974 "num_base_bdevs_operational": 2, 00:17:12.974 "base_bdevs_list": [ 00:17:12.974 { 00:17:12.974 "name": null, 00:17:12.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.974 "is_configured": false, 00:17:12.974 "data_offset": 0, 00:17:12.974 "data_size": 63488 00:17:12.974 }, 00:17:12.974 { 00:17:12.974 "name": "pt2", 00:17:12.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.974 "is_configured": true, 00:17:12.974 "data_offset": 2048, 00:17:12.974 "data_size": 63488 00:17:12.974 }, 00:17:12.974 { 00:17:12.974 "name": "pt3", 00:17:12.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.974 "is_configured": true, 00:17:12.974 "data_offset": 2048, 00:17:12.974 "data_size": 63488 00:17:12.974 } 00:17:12.974 ] 00:17:12.974 }' 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.974 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.233 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.233 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.233 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.233 [2024-12-05 20:10:14.614782] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:13.233 [2024-12-05 20:10:14.614812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.234 [2024-12-05 20:10:14.614916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.234 [2024-12-05 20:10:14.614975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.234 [2024-12-05 20:10:14.614989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.234 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.493 20:10:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.493 [2024-12-05 20:10:14.694620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.493 [2024-12-05 20:10:14.694670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.493 [2024-12-05 20:10:14.694685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:13.493 [2024-12-05 20:10:14.694694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:13.493 [2024-12-05 20:10:14.696739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.493 [2024-12-05 20:10:14.696772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.493 [2024-12-05 20:10:14.696843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:13.493 [2024-12-05 20:10:14.696906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.493 pt2 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.493 "name": "raid_bdev1", 00:17:13.493 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:13.493 "strip_size_kb": 64, 00:17:13.493 "state": "configuring", 00:17:13.493 "raid_level": "raid5f", 00:17:13.493 "superblock": true, 00:17:13.493 "num_base_bdevs": 3, 00:17:13.493 "num_base_bdevs_discovered": 1, 00:17:13.493 "num_base_bdevs_operational": 2, 00:17:13.493 "base_bdevs_list": [ 00:17:13.493 { 00:17:13.493 "name": null, 00:17:13.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.493 "is_configured": false, 00:17:13.493 "data_offset": 2048, 00:17:13.493 "data_size": 63488 00:17:13.493 }, 00:17:13.493 { 00:17:13.493 "name": "pt2", 00:17:13.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.493 "is_configured": true, 00:17:13.493 "data_offset": 2048, 00:17:13.493 "data_size": 63488 00:17:13.493 }, 00:17:13.493 { 00:17:13.493 "name": null, 00:17:13.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.493 "is_configured": false, 00:17:13.493 "data_offset": 2048, 00:17:13.493 "data_size": 63488 00:17:13.493 } 00:17:13.493 ] 00:17:13.493 }' 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.493 20:10:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 [2024-12-05 20:10:15.165837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.752 [2024-12-05 20:10:15.165998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.752 [2024-12-05 20:10:15.166037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:13.752 [2024-12-05 20:10:15.166068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.752 [2024-12-05 20:10:15.166566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.752 [2024-12-05 20:10:15.166631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.752 [2024-12-05 20:10:15.166739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:13.752 [2024-12-05 20:10:15.166795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.752 [2024-12-05 20:10:15.166965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:13.752 [2024-12-05 20:10:15.167007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:13.752 [2024-12-05 20:10:15.167275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:13.752 [2024-12-05 20:10:15.172375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:13.752 [2024-12-05 20:10:15.172431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:13.752 [2024-12-05 20:10:15.172822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.752 pt3 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.752 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.011 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.011 20:10:15 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.011 "name": "raid_bdev1", 00:17:14.011 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:14.011 "strip_size_kb": 64, 00:17:14.011 "state": "online", 00:17:14.011 "raid_level": "raid5f", 00:17:14.011 "superblock": true, 00:17:14.011 "num_base_bdevs": 3, 00:17:14.011 "num_base_bdevs_discovered": 2, 00:17:14.011 "num_base_bdevs_operational": 2, 00:17:14.011 "base_bdevs_list": [ 00:17:14.011 { 00:17:14.011 "name": null, 00:17:14.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.011 "is_configured": false, 00:17:14.011 "data_offset": 2048, 00:17:14.011 "data_size": 63488 00:17:14.011 }, 00:17:14.011 { 00:17:14.011 "name": "pt2", 00:17:14.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.011 "is_configured": true, 00:17:14.011 "data_offset": 2048, 00:17:14.011 "data_size": 63488 00:17:14.011 }, 00:17:14.011 { 00:17:14.011 "name": "pt3", 00:17:14.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.011 "is_configured": true, 00:17:14.011 "data_offset": 2048, 00:17:14.011 "data_size": 63488 00:17:14.011 } 00:17:14.011 ] 00:17:14.011 }' 00:17:14.011 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.011 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.270 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.270 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.270 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.270 [2024-12-05 20:10:15.622993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.270 [2024-12-05 20:10:15.623026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.270 [2024-12-05 20:10:15.623099] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:14.270 [2024-12-05 20:10:15.623165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.270 [2024-12-05 20:10:15.623174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:14.270 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.271 20:10:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 [2024-12-05 20:10:15.694884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.271 [2024-12-05 20:10:15.694951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.271 [2024-12-05 20:10:15.694970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:14.271 [2024-12-05 20:10:15.694979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.271 [2024-12-05 20:10:15.697213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.271 [2024-12-05 20:10:15.697288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.271 [2024-12-05 20:10:15.697376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.271 [2024-12-05 20:10:15.697443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.271 [2024-12-05 20:10:15.697598] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:14.271 [2024-12-05 20:10:15.697611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.271 [2024-12-05 20:10:15.697629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:14.271 [2024-12-05 20:10:15.697690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.271 pt1 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:14.271 20:10:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.271 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.530 "name": "raid_bdev1", 00:17:14.530 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:14.530 "strip_size_kb": 64, 00:17:14.530 "state": "configuring", 00:17:14.530 "raid_level": "raid5f", 00:17:14.530 
"superblock": true, 00:17:14.530 "num_base_bdevs": 3, 00:17:14.530 "num_base_bdevs_discovered": 1, 00:17:14.530 "num_base_bdevs_operational": 2, 00:17:14.530 "base_bdevs_list": [ 00:17:14.530 { 00:17:14.530 "name": null, 00:17:14.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.530 "is_configured": false, 00:17:14.530 "data_offset": 2048, 00:17:14.530 "data_size": 63488 00:17:14.530 }, 00:17:14.530 { 00:17:14.530 "name": "pt2", 00:17:14.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.530 "is_configured": true, 00:17:14.530 "data_offset": 2048, 00:17:14.530 "data_size": 63488 00:17:14.530 }, 00:17:14.530 { 00:17:14.530 "name": null, 00:17:14.530 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.530 "is_configured": false, 00:17:14.530 "data_offset": 2048, 00:17:14.530 "data_size": 63488 00:17:14.530 } 00:17:14.530 ] 00:17:14.530 }' 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.530 20:10:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.789 [2024-12-05 20:10:16.206031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:14.789 [2024-12-05 20:10:16.206145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.789 [2024-12-05 20:10:16.206185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:14.789 [2024-12-05 20:10:16.206214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.789 [2024-12-05 20:10:16.206700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.789 [2024-12-05 20:10:16.206763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:14.789 [2024-12-05 20:10:16.206869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:14.789 [2024-12-05 20:10:16.206935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.789 [2024-12-05 20:10:16.207097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:14.789 [2024-12-05 20:10:16.207134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:14.789 [2024-12-05 20:10:16.207398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:14.789 [2024-12-05 20:10:16.212792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:14.789 [2024-12-05 20:10:16.212856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:14.789 [2024-12-05 20:10:16.213122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.789 pt3 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.789 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.790 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.790 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.790 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.049 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.049 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.049 "name": "raid_bdev1", 00:17:15.049 "uuid": "7e1d0255-54e7-4450-9f04-36ffd8dc76d7", 00:17:15.049 "strip_size_kb": 64, 00:17:15.049 "state": "online", 00:17:15.049 "raid_level": 
"raid5f", 00:17:15.049 "superblock": true, 00:17:15.049 "num_base_bdevs": 3, 00:17:15.049 "num_base_bdevs_discovered": 2, 00:17:15.049 "num_base_bdevs_operational": 2, 00:17:15.049 "base_bdevs_list": [ 00:17:15.049 { 00:17:15.049 "name": null, 00:17:15.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.049 "is_configured": false, 00:17:15.049 "data_offset": 2048, 00:17:15.049 "data_size": 63488 00:17:15.049 }, 00:17:15.049 { 00:17:15.049 "name": "pt2", 00:17:15.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.049 "is_configured": true, 00:17:15.049 "data_offset": 2048, 00:17:15.049 "data_size": 63488 00:17:15.049 }, 00:17:15.049 { 00:17:15.049 "name": "pt3", 00:17:15.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.049 "is_configured": true, 00:17:15.049 "data_offset": 2048, 00:17:15.049 "data_size": 63488 00:17:15.049 } 00:17:15.049 ] 00:17:15.049 }' 00:17:15.049 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.049 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.308 [2024-12-05 20:10:16.675365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7e1d0255-54e7-4450-9f04-36ffd8dc76d7 '!=' 7e1d0255-54e7-4450-9f04-36ffd8dc76d7 ']' 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81219 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81219 ']' 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81219 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.308 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81219 00:17:15.567 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.567 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.567 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81219' 00:17:15.567 killing process with pid 81219 00:17:15.567 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81219 00:17:15.567 [2024-12-05 20:10:16.763016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.567 [2024-12-05 20:10:16.763161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:15.567 20:10:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81219 00:17:15.567 [2024-12-05 20:10:16.763254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.567 [2024-12-05 20:10:16.763271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:15.827 [2024-12-05 20:10:17.048158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.765 20:10:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:16.765 00:17:16.765 real 0m7.717s 00:17:16.765 user 0m12.131s 00:17:16.765 sys 0m1.391s 00:17:16.765 20:10:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.765 20:10:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 ************************************ 00:17:16.765 END TEST raid5f_superblock_test 00:17:16.765 ************************************ 00:17:16.765 20:10:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:16.765 20:10:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:16.765 20:10:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:16.765 20:10:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.765 20:10:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 ************************************ 00:17:16.765 START TEST raid5f_rebuild_test 00:17:16.765 ************************************ 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:16.765 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:16.766 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:17.025 20:10:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81658 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81658 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81658 ']' 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.025 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:17.025 Zero copy mechanism will not be used. 00:17:17.025 [2024-12-05 20:10:18.290552] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:17:17.025 [2024-12-05 20:10:18.290677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81658 ] 00:17:17.284 [2024-12-05 20:10:18.462227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.284 [2024-12-05 20:10:18.570754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.542 [2024-12-05 20:10:18.760365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.542 [2024-12-05 20:10:18.760404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.801 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.801 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:17.801 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 BaseBdev1_malloc 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 20:10:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 [2024-12-05 20:10:19.139418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.802 [2024-12-05 20:10:19.139486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.802 [2024-12-05 20:10:19.139525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:17.802 [2024-12-05 20:10:19.139536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.802 [2024-12-05 20:10:19.141572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.802 [2024-12-05 20:10:19.141614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.802 BaseBdev1 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 BaseBdev2_malloc 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 [2024-12-05 20:10:19.191243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:17:17.802 [2024-12-05 20:10:19.191319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.802 [2024-12-05 20:10:19.191340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:17.802 [2024-12-05 20:10:19.191350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.802 [2024-12-05 20:10:19.193350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.802 [2024-12-05 20:10:19.193476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:17.802 BaseBdev2 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 BaseBdev3_malloc 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 [2024-12-05 20:10:19.285854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:18.060 [2024-12-05 20:10:19.285935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.060 [2024-12-05 20:10:19.285958] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.060 [2024-12-05 20:10:19.285968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.060 [2024-12-05 20:10:19.287870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.060 [2024-12-05 20:10:19.287978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:18.060 BaseBdev3 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 spare_malloc 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 spare_delay 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 [2024-12-05 20:10:19.351114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.060 [2024-12-05 20:10:19.351181] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.060 [2024-12-05 20:10:19.351214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:18.060 [2024-12-05 20:10:19.351224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.060 [2024-12-05 20:10:19.353231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.060 [2024-12-05 20:10:19.353273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.060 spare 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 [2024-12-05 20:10:19.363159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.060 [2024-12-05 20:10:19.364834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.060 [2024-12-05 20:10:19.364907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.060 [2024-12-05 20:10:19.364990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:18.060 [2024-12-05 20:10:19.365000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:18.060 [2024-12-05 20:10:19.365230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:18.060 [2024-12-05 20:10:19.370708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:18.060 [2024-12-05 20:10:19.370763] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:18.060 [2024-12-05 20:10:19.371002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 20:10:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.060 "name": "raid_bdev1", 00:17:18.060 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:18.060 "strip_size_kb": 64, 00:17:18.060 "state": "online", 00:17:18.060 "raid_level": "raid5f", 00:17:18.060 "superblock": false, 00:17:18.060 "num_base_bdevs": 3, 00:17:18.060 "num_base_bdevs_discovered": 3, 00:17:18.060 "num_base_bdevs_operational": 3, 00:17:18.060 "base_bdevs_list": [ 00:17:18.060 { 00:17:18.060 "name": "BaseBdev1", 00:17:18.060 "uuid": "9a1c87e8-85f0-5374-9eb2-e84a2878995e", 00:17:18.060 "is_configured": true, 00:17:18.060 "data_offset": 0, 00:17:18.060 "data_size": 65536 00:17:18.060 }, 00:17:18.060 { 00:17:18.061 "name": "BaseBdev2", 00:17:18.061 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:18.061 "is_configured": true, 00:17:18.061 "data_offset": 0, 00:17:18.061 "data_size": 65536 00:17:18.061 }, 00:17:18.061 { 00:17:18.061 "name": "BaseBdev3", 00:17:18.061 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:18.061 "is_configured": true, 00:17:18.061 "data_offset": 0, 00:17:18.061 "data_size": 65536 00:17:18.061 } 00:17:18.061 ] 00:17:18.061 }' 00:17:18.061 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.061 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:18.627 [2024-12-05 20:10:19.816993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:18.627 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:18.886 [2024-12-05 20:10:20.072391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:18.886 /dev/nbd0 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.886 1+0 records in 00:17:18.886 1+0 records out 00:17:18.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391211 s, 10.5 MB/s 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:18.886 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:19.145 512+0 records in 00:17:19.145 512+0 records out 00:17:19.145 67108864 bytes (67 MB, 64 MiB) copied, 0.372606 s, 180 MB/s 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.145 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.404 
[2024-12-05 20:10:20.737105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.404 [2024-12-05 20:10:20.752762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.404 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.405 "name": "raid_bdev1", 00:17:19.405 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:19.405 "strip_size_kb": 64, 00:17:19.405 "state": "online", 00:17:19.405 "raid_level": "raid5f", 00:17:19.405 "superblock": false, 00:17:19.405 "num_base_bdevs": 3, 00:17:19.405 "num_base_bdevs_discovered": 2, 00:17:19.405 "num_base_bdevs_operational": 2, 00:17:19.405 "base_bdevs_list": [ 00:17:19.405 { 00:17:19.405 "name": null, 00:17:19.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.405 "is_configured": false, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 }, 00:17:19.405 { 00:17:19.405 "name": "BaseBdev2", 00:17:19.405 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 }, 00:17:19.405 { 00:17:19.405 "name": "BaseBdev3", 00:17:19.405 "uuid": 
"20893b2c-4c64-546f-897f-fa6624493210", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 } 00:17:19.405 ] 00:17:19.405 }' 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.405 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.972 20:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.972 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.972 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.972 [2024-12-05 20:10:21.188033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.972 [2024-12-05 20:10:21.204386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:19.972 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.972 20:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:19.972 [2024-12-05 20:10:21.211539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.906 20:10:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.906 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.906 "name": "raid_bdev1", 00:17:20.906 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:20.906 "strip_size_kb": 64, 00:17:20.906 "state": "online", 00:17:20.906 "raid_level": "raid5f", 00:17:20.906 "superblock": false, 00:17:20.906 "num_base_bdevs": 3, 00:17:20.906 "num_base_bdevs_discovered": 3, 00:17:20.906 "num_base_bdevs_operational": 3, 00:17:20.906 "process": { 00:17:20.906 "type": "rebuild", 00:17:20.906 "target": "spare", 00:17:20.906 "progress": { 00:17:20.906 "blocks": 20480, 00:17:20.906 "percent": 15 00:17:20.906 } 00:17:20.906 }, 00:17:20.906 "base_bdevs_list": [ 00:17:20.906 { 00:17:20.906 "name": "spare", 00:17:20.907 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:20.907 "is_configured": true, 00:17:20.907 "data_offset": 0, 00:17:20.907 "data_size": 65536 00:17:20.907 }, 00:17:20.907 { 00:17:20.907 "name": "BaseBdev2", 00:17:20.907 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:20.907 "is_configured": true, 00:17:20.907 "data_offset": 0, 00:17:20.907 "data_size": 65536 00:17:20.907 }, 00:17:20.907 { 00:17:20.907 "name": "BaseBdev3", 00:17:20.907 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:20.907 "is_configured": true, 00:17:20.907 "data_offset": 0, 00:17:20.907 "data_size": 65536 00:17:20.907 } 00:17:20.907 ] 00:17:20.907 }' 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.907 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.165 [2024-12-05 20:10:22.342284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.165 [2024-12-05 20:10:22.419740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.165 [2024-12-05 20:10:22.419799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.165 [2024-12-05 20:10:22.419836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.165 [2024-12-05 20:10:22.419843] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.165 "name": "raid_bdev1", 00:17:21.165 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:21.165 "strip_size_kb": 64, 00:17:21.165 "state": "online", 00:17:21.165 "raid_level": "raid5f", 00:17:21.165 "superblock": false, 00:17:21.165 "num_base_bdevs": 3, 00:17:21.165 "num_base_bdevs_discovered": 2, 00:17:21.165 "num_base_bdevs_operational": 2, 00:17:21.165 "base_bdevs_list": [ 00:17:21.165 { 00:17:21.165 "name": null, 00:17:21.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.165 "is_configured": false, 00:17:21.165 "data_offset": 0, 00:17:21.165 "data_size": 65536 00:17:21.165 }, 00:17:21.165 { 00:17:21.165 "name": "BaseBdev2", 00:17:21.165 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:21.165 "is_configured": true, 00:17:21.165 "data_offset": 0, 00:17:21.165 "data_size": 65536 00:17:21.165 }, 00:17:21.165 { 00:17:21.165 "name": "BaseBdev3", 00:17:21.165 "uuid": 
"20893b2c-4c64-546f-897f-fa6624493210", 00:17:21.165 "is_configured": true, 00:17:21.165 "data_offset": 0, 00:17:21.165 "data_size": 65536 00:17:21.165 } 00:17:21.165 ] 00:17:21.165 }' 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.165 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.732 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.732 "name": "raid_bdev1", 00:17:21.732 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:21.732 "strip_size_kb": 64, 00:17:21.732 "state": "online", 00:17:21.732 "raid_level": "raid5f", 00:17:21.732 "superblock": false, 00:17:21.732 "num_base_bdevs": 3, 00:17:21.732 "num_base_bdevs_discovered": 2, 00:17:21.732 "num_base_bdevs_operational": 2, 00:17:21.732 "base_bdevs_list": [ 00:17:21.732 { 00:17:21.732 
"name": null, 00:17:21.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.732 "is_configured": false, 00:17:21.732 "data_offset": 0, 00:17:21.732 "data_size": 65536 00:17:21.732 }, 00:17:21.732 { 00:17:21.732 "name": "BaseBdev2", 00:17:21.732 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:21.732 "is_configured": true, 00:17:21.732 "data_offset": 0, 00:17:21.732 "data_size": 65536 00:17:21.732 }, 00:17:21.732 { 00:17:21.732 "name": "BaseBdev3", 00:17:21.732 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:21.732 "is_configured": true, 00:17:21.732 "data_offset": 0, 00:17:21.733 "data_size": 65536 00:17:21.733 } 00:17:21.733 ] 00:17:21.733 }' 00:17:21.733 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.733 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.733 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.733 [2024-12-05 20:10:23.014691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.733 [2024-12-05 20:10:23.031028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.733 20:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:21.733 [2024-12-05 20:10:23.038716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.696 "name": "raid_bdev1", 00:17:22.696 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:22.696 "strip_size_kb": 64, 00:17:22.696 "state": "online", 00:17:22.696 "raid_level": "raid5f", 00:17:22.696 "superblock": false, 00:17:22.696 "num_base_bdevs": 3, 00:17:22.696 "num_base_bdevs_discovered": 3, 00:17:22.696 "num_base_bdevs_operational": 3, 00:17:22.696 "process": { 00:17:22.696 "type": "rebuild", 00:17:22.696 "target": "spare", 00:17:22.696 "progress": { 00:17:22.696 "blocks": 20480, 00:17:22.696 "percent": 15 00:17:22.696 } 00:17:22.696 }, 00:17:22.696 "base_bdevs_list": [ 00:17:22.696 { 00:17:22.696 "name": "spare", 00:17:22.696 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:22.696 "is_configured": true, 00:17:22.696 "data_offset": 0, 
00:17:22.696 "data_size": 65536 00:17:22.696 }, 00:17:22.696 { 00:17:22.696 "name": "BaseBdev2", 00:17:22.696 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:22.696 "is_configured": true, 00:17:22.696 "data_offset": 0, 00:17:22.696 "data_size": 65536 00:17:22.696 }, 00:17:22.696 { 00:17:22.696 "name": "BaseBdev3", 00:17:22.696 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:22.696 "is_configured": true, 00:17:22.696 "data_offset": 0, 00:17:22.696 "data_size": 65536 00:17:22.696 } 00:17:22.696 ] 00:17:22.696 }' 00:17:22.696 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=546 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.958 20:10:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.958 "name": "raid_bdev1", 00:17:22.958 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:22.958 "strip_size_kb": 64, 00:17:22.958 "state": "online", 00:17:22.958 "raid_level": "raid5f", 00:17:22.958 "superblock": false, 00:17:22.958 "num_base_bdevs": 3, 00:17:22.958 "num_base_bdevs_discovered": 3, 00:17:22.958 "num_base_bdevs_operational": 3, 00:17:22.958 "process": { 00:17:22.958 "type": "rebuild", 00:17:22.958 "target": "spare", 00:17:22.958 "progress": { 00:17:22.958 "blocks": 22528, 00:17:22.958 "percent": 17 00:17:22.958 } 00:17:22.958 }, 00:17:22.958 "base_bdevs_list": [ 00:17:22.958 { 00:17:22.958 "name": "spare", 00:17:22.958 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:22.958 "is_configured": true, 00:17:22.958 "data_offset": 0, 00:17:22.958 "data_size": 65536 00:17:22.958 }, 00:17:22.958 { 00:17:22.958 "name": "BaseBdev2", 00:17:22.958 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:22.958 "is_configured": true, 00:17:22.958 "data_offset": 0, 00:17:22.958 "data_size": 65536 00:17:22.958 }, 00:17:22.958 { 00:17:22.958 "name": "BaseBdev3", 00:17:22.958 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:22.958 "is_configured": true, 00:17:22.958 "data_offset": 0, 00:17:22.958 "data_size": 65536 00:17:22.958 } 
00:17:22.958 ] 00:17:22.958 }' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.958 20:10:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.334 "name": "raid_bdev1", 00:17:24.334 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:24.334 
"strip_size_kb": 64, 00:17:24.334 "state": "online", 00:17:24.334 "raid_level": "raid5f", 00:17:24.334 "superblock": false, 00:17:24.334 "num_base_bdevs": 3, 00:17:24.334 "num_base_bdevs_discovered": 3, 00:17:24.334 "num_base_bdevs_operational": 3, 00:17:24.334 "process": { 00:17:24.334 "type": "rebuild", 00:17:24.334 "target": "spare", 00:17:24.334 "progress": { 00:17:24.334 "blocks": 45056, 00:17:24.334 "percent": 34 00:17:24.334 } 00:17:24.334 }, 00:17:24.334 "base_bdevs_list": [ 00:17:24.334 { 00:17:24.334 "name": "spare", 00:17:24.334 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:24.334 "is_configured": true, 00:17:24.334 "data_offset": 0, 00:17:24.334 "data_size": 65536 00:17:24.334 }, 00:17:24.334 { 00:17:24.334 "name": "BaseBdev2", 00:17:24.334 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:24.334 "is_configured": true, 00:17:24.334 "data_offset": 0, 00:17:24.334 "data_size": 65536 00:17:24.334 }, 00:17:24.334 { 00:17:24.334 "name": "BaseBdev3", 00:17:24.334 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:24.334 "is_configured": true, 00:17:24.334 "data_offset": 0, 00:17:24.334 "data_size": 65536 00:17:24.334 } 00:17:24.334 ] 00:17:24.334 }' 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.334 20:10:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.339 20:10:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.339 "name": "raid_bdev1", 00:17:25.339 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:25.339 "strip_size_kb": 64, 00:17:25.339 "state": "online", 00:17:25.339 "raid_level": "raid5f", 00:17:25.339 "superblock": false, 00:17:25.339 "num_base_bdevs": 3, 00:17:25.339 "num_base_bdevs_discovered": 3, 00:17:25.339 "num_base_bdevs_operational": 3, 00:17:25.339 "process": { 00:17:25.339 "type": "rebuild", 00:17:25.339 "target": "spare", 00:17:25.339 "progress": { 00:17:25.339 "blocks": 69632, 00:17:25.339 "percent": 53 00:17:25.339 } 00:17:25.339 }, 00:17:25.339 "base_bdevs_list": [ 00:17:25.339 { 00:17:25.339 "name": "spare", 00:17:25.339 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:25.339 "is_configured": true, 00:17:25.339 "data_offset": 0, 00:17:25.339 "data_size": 65536 00:17:25.339 }, 00:17:25.339 { 00:17:25.339 "name": "BaseBdev2", 00:17:25.339 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:25.339 
"is_configured": true, 00:17:25.339 "data_offset": 0, 00:17:25.339 "data_size": 65536 00:17:25.339 }, 00:17:25.339 { 00:17:25.339 "name": "BaseBdev3", 00:17:25.339 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:25.339 "is_configured": true, 00:17:25.339 "data_offset": 0, 00:17:25.339 "data_size": 65536 00:17:25.339 } 00:17:25.339 ] 00:17:25.339 }' 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.339 20:10:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.276 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.276 "name": "raid_bdev1", 00:17:26.276 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:26.276 "strip_size_kb": 64, 00:17:26.276 "state": "online", 00:17:26.276 "raid_level": "raid5f", 00:17:26.276 "superblock": false, 00:17:26.276 "num_base_bdevs": 3, 00:17:26.276 "num_base_bdevs_discovered": 3, 00:17:26.276 "num_base_bdevs_operational": 3, 00:17:26.276 "process": { 00:17:26.276 "type": "rebuild", 00:17:26.276 "target": "spare", 00:17:26.276 "progress": { 00:17:26.277 "blocks": 92160, 00:17:26.277 "percent": 70 00:17:26.277 } 00:17:26.277 }, 00:17:26.277 "base_bdevs_list": [ 00:17:26.277 { 00:17:26.277 "name": "spare", 00:17:26.277 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:26.277 "is_configured": true, 00:17:26.277 "data_offset": 0, 00:17:26.277 "data_size": 65536 00:17:26.277 }, 00:17:26.277 { 00:17:26.277 "name": "BaseBdev2", 00:17:26.277 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:26.277 "is_configured": true, 00:17:26.277 "data_offset": 0, 00:17:26.277 "data_size": 65536 00:17:26.277 }, 00:17:26.277 { 00:17:26.277 "name": "BaseBdev3", 00:17:26.277 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:26.277 "is_configured": true, 00:17:26.277 "data_offset": 0, 00:17:26.277 "data_size": 65536 00:17:26.277 } 00:17:26.277 ] 00:17:26.277 }' 00:17:26.277 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.535 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.535 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.535 20:10:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.535 20:10:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.472 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.473 "name": "raid_bdev1", 00:17:27.473 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:27.473 "strip_size_kb": 64, 00:17:27.473 "state": "online", 00:17:27.473 "raid_level": "raid5f", 00:17:27.473 "superblock": false, 00:17:27.473 "num_base_bdevs": 3, 00:17:27.473 "num_base_bdevs_discovered": 3, 00:17:27.473 "num_base_bdevs_operational": 3, 00:17:27.473 "process": { 00:17:27.473 "type": "rebuild", 00:17:27.473 "target": "spare", 00:17:27.473 "progress": { 00:17:27.473 "blocks": 116736, 00:17:27.473 "percent": 89 00:17:27.473 } 00:17:27.473 }, 00:17:27.473 "base_bdevs_list": [ 00:17:27.473 { 
00:17:27.473 "name": "spare", 00:17:27.473 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:27.473 "is_configured": true, 00:17:27.473 "data_offset": 0, 00:17:27.473 "data_size": 65536 00:17:27.473 }, 00:17:27.473 { 00:17:27.473 "name": "BaseBdev2", 00:17:27.473 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:27.473 "is_configured": true, 00:17:27.473 "data_offset": 0, 00:17:27.473 "data_size": 65536 00:17:27.473 }, 00:17:27.473 { 00:17:27.473 "name": "BaseBdev3", 00:17:27.473 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:27.473 "is_configured": true, 00:17:27.473 "data_offset": 0, 00:17:27.473 "data_size": 65536 00:17:27.473 } 00:17:27.473 ] 00:17:27.473 }' 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.473 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.732 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.732 20:10:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.300 [2024-12-05 20:10:29.478088] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:28.300 [2024-12-05 20:10:29.478215] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:28.300 [2024-12-05 20:10:29.478277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.559 20:10:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.559 "name": "raid_bdev1", 00:17:28.559 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:28.559 "strip_size_kb": 64, 00:17:28.559 "state": "online", 00:17:28.559 "raid_level": "raid5f", 00:17:28.559 "superblock": false, 00:17:28.559 "num_base_bdevs": 3, 00:17:28.559 "num_base_bdevs_discovered": 3, 00:17:28.559 "num_base_bdevs_operational": 3, 00:17:28.559 "base_bdevs_list": [ 00:17:28.559 { 00:17:28.559 "name": "spare", 00:17:28.559 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:28.559 "is_configured": true, 00:17:28.559 "data_offset": 0, 00:17:28.559 "data_size": 65536 00:17:28.559 }, 00:17:28.559 { 00:17:28.559 "name": "BaseBdev2", 00:17:28.559 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:28.559 "is_configured": true, 00:17:28.559 "data_offset": 0, 00:17:28.559 "data_size": 65536 00:17:28.559 }, 00:17:28.559 { 00:17:28.559 "name": "BaseBdev3", 00:17:28.559 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:28.559 "is_configured": true, 00:17:28.559 "data_offset": 0, 00:17:28.559 "data_size": 65536 00:17:28.559 } 
00:17:28.559 ] 00:17:28.559 }' 00:17:28.559 20:10:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.818 "name": "raid_bdev1", 00:17:28.818 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:28.818 "strip_size_kb": 64, 00:17:28.818 "state": "online", 00:17:28.818 "raid_level": "raid5f", 00:17:28.818 "superblock": false, 
00:17:28.818 "num_base_bdevs": 3, 00:17:28.818 "num_base_bdevs_discovered": 3, 00:17:28.818 "num_base_bdevs_operational": 3, 00:17:28.818 "base_bdevs_list": [ 00:17:28.818 { 00:17:28.818 "name": "spare", 00:17:28.818 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:28.818 "is_configured": true, 00:17:28.818 "data_offset": 0, 00:17:28.818 "data_size": 65536 00:17:28.818 }, 00:17:28.818 { 00:17:28.818 "name": "BaseBdev2", 00:17:28.818 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:28.818 "is_configured": true, 00:17:28.818 "data_offset": 0, 00:17:28.818 "data_size": 65536 00:17:28.818 }, 00:17:28.818 { 00:17:28.818 "name": "BaseBdev3", 00:17:28.818 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 00:17:28.818 "is_configured": true, 00:17:28.818 "data_offset": 0, 00:17:28.818 "data_size": 65536 00:17:28.818 } 00:17:28.818 ] 00:17:28.818 }' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.818 
20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.818 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.077 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.077 "name": "raid_bdev1", 00:17:29.077 "uuid": "ad1a0911-e037-474c-a3d0-ea5433bf1295", 00:17:29.077 "strip_size_kb": 64, 00:17:29.077 "state": "online", 00:17:29.077 "raid_level": "raid5f", 00:17:29.077 "superblock": false, 00:17:29.077 "num_base_bdevs": 3, 00:17:29.077 "num_base_bdevs_discovered": 3, 00:17:29.077 "num_base_bdevs_operational": 3, 00:17:29.077 "base_bdevs_list": [ 00:17:29.077 { 00:17:29.077 "name": "spare", 00:17:29.077 "uuid": "4e284001-9e50-5d46-945a-b5a24588ea91", 00:17:29.077 "is_configured": true, 00:17:29.077 "data_offset": 0, 00:17:29.077 "data_size": 65536 00:17:29.077 }, 00:17:29.077 { 00:17:29.077 "name": "BaseBdev2", 00:17:29.077 "uuid": "582c5979-2c50-54e7-81d0-e2406a03ef6b", 00:17:29.077 "is_configured": true, 00:17:29.077 "data_offset": 0, 00:17:29.077 "data_size": 65536 00:17:29.077 }, 00:17:29.077 { 00:17:29.077 "name": "BaseBdev3", 00:17:29.077 "uuid": "20893b2c-4c64-546f-897f-fa6624493210", 
00:17:29.077 "is_configured": true, 00:17:29.077 "data_offset": 0, 00:17:29.077 "data_size": 65536 00:17:29.077 } 00:17:29.077 ] 00:17:29.077 }' 00:17:29.077 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.077 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.338 [2024-12-05 20:10:30.669961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.338 [2024-12-05 20:10:30.670051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.338 [2024-12-05 20:10:30.670153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.338 [2024-12-05 20:10:30.670263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.338 [2024-12-05 20:10:30.670318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:29.338 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:29.598 /dev/nbd0 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.598 1+0 records in 00:17:29.598 1+0 records out 00:17:29.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380339 s, 10.8 MB/s 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:29.598 20:10:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:29.857 /dev/nbd1 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:29.857 20:10:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.857 1+0 records in 00:17:29.857 1+0 records out 00:17:29.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435087 s, 9.4 MB/s 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:29.857 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.858 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:29.858 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.117 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.376 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81658 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81658 ']' 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81658 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81658 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.636 killing process with pid 81658 00:17:30.636 Received shutdown signal, test time was about 60.000000 seconds 00:17:30.636 00:17:30.636 Latency(us) 00:17:30.636 [2024-12-05T20:10:32.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.636 [2024-12-05T20:10:32.073Z] 
=================================================================================================================== 00:17:30.636 [2024-12-05T20:10:32.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81658' 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81658 00:17:30.636 [2024-12-05 20:10:31.878601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.636 20:10:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81658 00:17:30.896 [2024-12-05 20:10:32.251971] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:32.277 00:17:32.277 real 0m15.102s 00:17:32.277 user 0m18.525s 00:17:32.277 sys 0m1.935s 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.277 ************************************ 00:17:32.277 END TEST raid5f_rebuild_test 00:17:32.277 ************************************ 00:17:32.277 20:10:33 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:32.277 20:10:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:32.277 20:10:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.277 20:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.277 ************************************ 00:17:32.277 START TEST raid5f_rebuild_test_sb 00:17:32.277 ************************************ 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:32.277 
20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82099 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82099 00:17:32.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82099 ']' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.277 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:32.277 Zero copy mechanism will not be used. 00:17:32.277 [2024-12-05 20:10:33.494071] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:17:32.277 [2024-12-05 20:10:33.494212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82099 ] 00:17:32.277 [2024-12-05 20:10:33.676015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.537 [2024-12-05 20:10:33.784597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.797 [2024-12-05 20:10:33.979077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.797 [2024-12-05 20:10:33.979112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:33.058 20:10:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.058 BaseBdev1_malloc 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.058 [2024-12-05 20:10:34.452127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:33.058 [2024-12-05 20:10:34.452189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.058 [2024-12-05 20:10:34.452210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.058 [2024-12-05 20:10:34.452220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.058 [2024-12-05 20:10:34.454288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.058 [2024-12-05 20:10:34.454327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:33.058 BaseBdev1 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.058 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 BaseBdev2_malloc 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-12-05 20:10:34.504569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:33.319 [2024-12-05 20:10:34.504644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.319 [2024-12-05 20:10:34.504668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.319 [2024-12-05 20:10:34.504678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.319 [2024-12-05 20:10:34.506700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.319 [2024-12-05 20:10:34.506737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:33.319 BaseBdev2 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 
20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 BaseBdev3_malloc 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-12-05 20:10:34.570611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:33.319 [2024-12-05 20:10:34.570660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.319 [2024-12-05 20:10:34.570696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.319 [2024-12-05 20:10:34.570707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.319 [2024-12-05 20:10:34.572684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.319 [2024-12-05 20:10:34.572764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:33.319 BaseBdev3 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 spare_malloc 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 spare_delay 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-12-05 20:10:34.635946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.319 [2024-12-05 20:10:34.635994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.319 [2024-12-05 20:10:34.636010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:33.319 [2024-12-05 20:10:34.636019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.319 [2024-12-05 20:10:34.638059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.319 [2024-12-05 20:10:34.638147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.319 spare 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-12-05 20:10:34.647989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.319 [2024-12-05 20:10:34.649697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.319 [2024-12-05 20:10:34.649760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.319 [2024-12-05 20:10:34.649948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:33.319 [2024-12-05 20:10:34.649961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:33.319 [2024-12-05 20:10:34.650185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:33.319 [2024-12-05 20:10:34.655801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:33.319 [2024-12-05 20:10:34.655857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:33.319 [2024-12-05 20:10:34.656099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.319 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.320 "name": "raid_bdev1", 00:17:33.320 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:33.320 "strip_size_kb": 64, 00:17:33.320 "state": "online", 00:17:33.320 "raid_level": "raid5f", 00:17:33.320 "superblock": true, 00:17:33.320 "num_base_bdevs": 3, 00:17:33.320 "num_base_bdevs_discovered": 3, 00:17:33.320 "num_base_bdevs_operational": 3, 00:17:33.320 "base_bdevs_list": [ 00:17:33.320 { 00:17:33.320 "name": "BaseBdev1", 00:17:33.320 "uuid": "9b2424ee-d0b3-5d5a-b38b-a721ebb0d056", 00:17:33.320 "is_configured": true, 00:17:33.320 "data_offset": 2048, 00:17:33.320 "data_size": 63488 00:17:33.320 }, 00:17:33.320 { 00:17:33.320 "name": "BaseBdev2", 00:17:33.320 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:33.320 "is_configured": true, 00:17:33.320 "data_offset": 2048, 00:17:33.320 "data_size": 63488 00:17:33.320 }, 00:17:33.320 { 00:17:33.320 "name": 
"BaseBdev3", 00:17:33.320 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:33.320 "is_configured": true, 00:17:33.320 "data_offset": 2048, 00:17:33.320 "data_size": 63488 00:17:33.320 } 00:17:33.320 ] 00:17:33.320 }' 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.320 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.886 [2024-12-05 20:10:35.093683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:33.886 20:10:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.886 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:34.144 [2024-12-05 20:10:35.373073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:34.144 /dev/nbd0 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.144 1+0 records in 00:17:34.144 1+0 records out 00:17:34.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547303 s, 7.5 MB/s 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:17:34.144 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:34.403 496+0 records in 00:17:34.403 496+0 records out 00:17:34.403 65011712 bytes (65 MB, 62 MiB) copied, 0.358583 s, 181 MB/s 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.403 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.662 [2024-12-05 20:10:36.030972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.662 20:10:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.662 [2024-12-05 20:10:36.050725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.662 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.919 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.919 "name": "raid_bdev1", 00:17:34.919 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:34.919 "strip_size_kb": 64, 00:17:34.919 "state": "online", 00:17:34.919 "raid_level": "raid5f", 00:17:34.919 "superblock": true, 00:17:34.919 "num_base_bdevs": 3, 00:17:34.919 "num_base_bdevs_discovered": 2, 00:17:34.919 "num_base_bdevs_operational": 2, 00:17:34.919 "base_bdevs_list": [ 00:17:34.919 { 00:17:34.919 "name": null, 00:17:34.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.919 "is_configured": false, 00:17:34.919 "data_offset": 0, 00:17:34.919 "data_size": 63488 00:17:34.919 }, 00:17:34.919 { 00:17:34.919 "name": "BaseBdev2", 00:17:34.919 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:34.919 "is_configured": true, 00:17:34.919 "data_offset": 2048, 00:17:34.919 "data_size": 63488 00:17:34.919 }, 00:17:34.919 { 00:17:34.919 "name": "BaseBdev3", 00:17:34.919 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:34.919 "is_configured": true, 00:17:34.919 "data_offset": 2048, 00:17:34.919 "data_size": 63488 00:17:34.919 } 00:17:34.919 ] 00:17:34.919 }' 00:17:34.919 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.919 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.177 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.177 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.177 20:10:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.177 [2024-12-05 20:10:36.465999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.177 [2024-12-05 20:10:36.482748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:35.177 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.177 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:35.177 [2024-12-05 20:10:36.489806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.141 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.141 "name": "raid_bdev1", 00:17:36.141 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 
00:17:36.141 "strip_size_kb": 64, 00:17:36.141 "state": "online", 00:17:36.141 "raid_level": "raid5f", 00:17:36.141 "superblock": true, 00:17:36.141 "num_base_bdevs": 3, 00:17:36.141 "num_base_bdevs_discovered": 3, 00:17:36.141 "num_base_bdevs_operational": 3, 00:17:36.141 "process": { 00:17:36.141 "type": "rebuild", 00:17:36.141 "target": "spare", 00:17:36.141 "progress": { 00:17:36.141 "blocks": 20480, 00:17:36.141 "percent": 16 00:17:36.141 } 00:17:36.141 }, 00:17:36.141 "base_bdevs_list": [ 00:17:36.141 { 00:17:36.141 "name": "spare", 00:17:36.141 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:36.141 "is_configured": true, 00:17:36.141 "data_offset": 2048, 00:17:36.141 "data_size": 63488 00:17:36.141 }, 00:17:36.141 { 00:17:36.141 "name": "BaseBdev2", 00:17:36.141 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:36.141 "is_configured": true, 00:17:36.141 "data_offset": 2048, 00:17:36.141 "data_size": 63488 00:17:36.141 }, 00:17:36.142 { 00:17:36.142 "name": "BaseBdev3", 00:17:36.142 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:36.142 "is_configured": true, 00:17:36.142 "data_offset": 2048, 00:17:36.142 "data_size": 63488 00:17:36.142 } 00:17:36.142 ] 00:17:36.142 }' 00:17:36.142 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:36.400 [2024-12-05 20:10:37.645017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.400 [2024-12-05 20:10:37.697970] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.400 [2024-12-05 20:10:37.698051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.400 [2024-12-05 20:10:37.698069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.400 [2024-12-05 20:10:37.698076] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.400 
20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.400 "name": "raid_bdev1", 00:17:36.400 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:36.400 "strip_size_kb": 64, 00:17:36.400 "state": "online", 00:17:36.400 "raid_level": "raid5f", 00:17:36.400 "superblock": true, 00:17:36.400 "num_base_bdevs": 3, 00:17:36.400 "num_base_bdevs_discovered": 2, 00:17:36.400 "num_base_bdevs_operational": 2, 00:17:36.400 "base_bdevs_list": [ 00:17:36.400 { 00:17:36.400 "name": null, 00:17:36.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.400 "is_configured": false, 00:17:36.400 "data_offset": 0, 00:17:36.400 "data_size": 63488 00:17:36.400 }, 00:17:36.400 { 00:17:36.400 "name": "BaseBdev2", 00:17:36.400 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:36.400 "is_configured": true, 00:17:36.400 "data_offset": 2048, 00:17:36.400 "data_size": 63488 00:17:36.400 }, 00:17:36.400 { 00:17:36.400 "name": "BaseBdev3", 00:17:36.400 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:36.400 "is_configured": true, 00:17:36.400 "data_offset": 2048, 00:17:36.400 "data_size": 63488 00:17:36.400 } 00:17:36.400 ] 00:17:36.400 }' 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.400 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.966 20:10:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.966 "name": "raid_bdev1", 00:17:36.966 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:36.966 "strip_size_kb": 64, 00:17:36.966 "state": "online", 00:17:36.966 "raid_level": "raid5f", 00:17:36.966 "superblock": true, 00:17:36.966 "num_base_bdevs": 3, 00:17:36.966 "num_base_bdevs_discovered": 2, 00:17:36.966 "num_base_bdevs_operational": 2, 00:17:36.966 "base_bdevs_list": [ 00:17:36.966 { 00:17:36.966 "name": null, 00:17:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.966 "is_configured": false, 00:17:36.966 "data_offset": 0, 00:17:36.966 "data_size": 63488 00:17:36.966 }, 00:17:36.966 { 00:17:36.966 "name": "BaseBdev2", 00:17:36.966 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:36.966 "is_configured": true, 00:17:36.966 "data_offset": 2048, 00:17:36.966 "data_size": 63488 00:17:36.966 }, 00:17:36.966 { 00:17:36.966 "name": "BaseBdev3", 00:17:36.966 "uuid": 
"8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:36.966 "is_configured": true, 00:17:36.966 "data_offset": 2048, 00:17:36.966 "data_size": 63488 00:17:36.966 } 00:17:36.966 ] 00:17:36.966 }' 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.966 [2024-12-05 20:10:38.267278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.966 [2024-12-05 20:10:38.282446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.966 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:36.966 [2024-12-05 20:10:38.289398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.162 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.162 "name": "raid_bdev1", 00:17:38.162 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:38.162 "strip_size_kb": 64, 00:17:38.162 "state": "online", 00:17:38.162 "raid_level": "raid5f", 00:17:38.162 "superblock": true, 00:17:38.162 "num_base_bdevs": 3, 00:17:38.162 "num_base_bdevs_discovered": 3, 00:17:38.162 "num_base_bdevs_operational": 3, 00:17:38.162 "process": { 00:17:38.162 "type": "rebuild", 00:17:38.162 "target": "spare", 00:17:38.162 "progress": { 00:17:38.162 "blocks": 20480, 00:17:38.162 "percent": 16 00:17:38.162 } 00:17:38.162 }, 00:17:38.162 "base_bdevs_list": [ 00:17:38.162 { 00:17:38.162 "name": "spare", 00:17:38.162 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:38.162 "is_configured": true, 00:17:38.162 "data_offset": 2048, 00:17:38.162 "data_size": 63488 00:17:38.162 }, 00:17:38.162 { 00:17:38.162 "name": "BaseBdev2", 00:17:38.162 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:38.163 "is_configured": true, 00:17:38.163 "data_offset": 2048, 00:17:38.163 "data_size": 63488 00:17:38.163 }, 00:17:38.163 { 00:17:38.163 "name": "BaseBdev3", 00:17:38.163 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:38.163 
"is_configured": true, 00:17:38.163 "data_offset": 2048, 00:17:38.163 "data_size": 63488 00:17:38.163 } 00:17:38.163 ] 00:17:38.163 }' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:38.163 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=561 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.163 "name": "raid_bdev1", 00:17:38.163 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:38.163 "strip_size_kb": 64, 00:17:38.163 "state": "online", 00:17:38.163 "raid_level": "raid5f", 00:17:38.163 "superblock": true, 00:17:38.163 "num_base_bdevs": 3, 00:17:38.163 "num_base_bdevs_discovered": 3, 00:17:38.163 "num_base_bdevs_operational": 3, 00:17:38.163 "process": { 00:17:38.163 "type": "rebuild", 00:17:38.163 "target": "spare", 00:17:38.163 "progress": { 00:17:38.163 "blocks": 22528, 00:17:38.163 "percent": 17 00:17:38.163 } 00:17:38.163 }, 00:17:38.163 "base_bdevs_list": [ 00:17:38.163 { 00:17:38.163 "name": "spare", 00:17:38.163 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:38.163 "is_configured": true, 00:17:38.163 "data_offset": 2048, 00:17:38.163 "data_size": 63488 00:17:38.163 }, 00:17:38.163 { 00:17:38.163 "name": "BaseBdev2", 00:17:38.163 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:38.163 "is_configured": true, 00:17:38.163 "data_offset": 2048, 00:17:38.163 "data_size": 63488 00:17:38.163 }, 00:17:38.163 { 00:17:38.163 "name": "BaseBdev3", 00:17:38.163 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:38.163 "is_configured": true, 00:17:38.163 "data_offset": 2048, 00:17:38.163 "data_size": 63488 00:17:38.163 } 00:17:38.163 ] 00:17:38.163 }' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.163 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.102 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.361 "name": "raid_bdev1", 00:17:39.361 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:39.361 "strip_size_kb": 64, 00:17:39.361 "state": "online", 00:17:39.361 
"raid_level": "raid5f", 00:17:39.361 "superblock": true, 00:17:39.361 "num_base_bdevs": 3, 00:17:39.361 "num_base_bdevs_discovered": 3, 00:17:39.361 "num_base_bdevs_operational": 3, 00:17:39.361 "process": { 00:17:39.361 "type": "rebuild", 00:17:39.361 "target": "spare", 00:17:39.361 "progress": { 00:17:39.361 "blocks": 45056, 00:17:39.361 "percent": 35 00:17:39.361 } 00:17:39.361 }, 00:17:39.361 "base_bdevs_list": [ 00:17:39.361 { 00:17:39.361 "name": "spare", 00:17:39.361 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:39.361 "is_configured": true, 00:17:39.361 "data_offset": 2048, 00:17:39.361 "data_size": 63488 00:17:39.361 }, 00:17:39.361 { 00:17:39.361 "name": "BaseBdev2", 00:17:39.361 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:39.361 "is_configured": true, 00:17:39.361 "data_offset": 2048, 00:17:39.361 "data_size": 63488 00:17:39.361 }, 00:17:39.361 { 00:17:39.361 "name": "BaseBdev3", 00:17:39.361 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:39.361 "is_configured": true, 00:17:39.361 "data_offset": 2048, 00:17:39.361 "data_size": 63488 00:17:39.361 } 00:17:39.361 ] 00:17:39.361 }' 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.361 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.298 "name": "raid_bdev1", 00:17:40.298 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:40.298 "strip_size_kb": 64, 00:17:40.298 "state": "online", 00:17:40.298 "raid_level": "raid5f", 00:17:40.298 "superblock": true, 00:17:40.298 "num_base_bdevs": 3, 00:17:40.298 "num_base_bdevs_discovered": 3, 00:17:40.298 "num_base_bdevs_operational": 3, 00:17:40.298 "process": { 00:17:40.298 "type": "rebuild", 00:17:40.298 "target": "spare", 00:17:40.298 "progress": { 00:17:40.298 "blocks": 67584, 00:17:40.298 "percent": 53 00:17:40.298 } 00:17:40.298 }, 00:17:40.298 "base_bdevs_list": [ 00:17:40.298 { 00:17:40.298 "name": "spare", 00:17:40.298 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:40.298 "is_configured": true, 00:17:40.298 "data_offset": 2048, 00:17:40.298 "data_size": 63488 00:17:40.298 }, 00:17:40.298 { 00:17:40.298 "name": "BaseBdev2", 00:17:40.298 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:40.298 
"is_configured": true, 00:17:40.298 "data_offset": 2048, 00:17:40.298 "data_size": 63488 00:17:40.298 }, 00:17:40.298 { 00:17:40.298 "name": "BaseBdev3", 00:17:40.298 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:40.298 "is_configured": true, 00:17:40.298 "data_offset": 2048, 00:17:40.298 "data_size": 63488 00:17:40.298 } 00:17:40.298 ] 00:17:40.298 }' 00:17:40.298 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.557 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.557 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.557 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.557 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.495 20:10:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.495 "name": "raid_bdev1", 00:17:41.495 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:41.495 "strip_size_kb": 64, 00:17:41.495 "state": "online", 00:17:41.495 "raid_level": "raid5f", 00:17:41.495 "superblock": true, 00:17:41.495 "num_base_bdevs": 3, 00:17:41.495 "num_base_bdevs_discovered": 3, 00:17:41.495 "num_base_bdevs_operational": 3, 00:17:41.495 "process": { 00:17:41.495 "type": "rebuild", 00:17:41.495 "target": "spare", 00:17:41.495 "progress": { 00:17:41.495 "blocks": 90112, 00:17:41.495 "percent": 70 00:17:41.495 } 00:17:41.495 }, 00:17:41.495 "base_bdevs_list": [ 00:17:41.495 { 00:17:41.495 "name": "spare", 00:17:41.495 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:41.495 "is_configured": true, 00:17:41.495 "data_offset": 2048, 00:17:41.495 "data_size": 63488 00:17:41.495 }, 00:17:41.495 { 00:17:41.495 "name": "BaseBdev2", 00:17:41.495 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:41.495 "is_configured": true, 00:17:41.495 "data_offset": 2048, 00:17:41.495 "data_size": 63488 00:17:41.495 }, 00:17:41.495 { 00:17:41.495 "name": "BaseBdev3", 00:17:41.495 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:41.495 "is_configured": true, 00:17:41.495 "data_offset": 2048, 00:17:41.495 "data_size": 63488 00:17:41.495 } 00:17:41.495 ] 00:17:41.495 }' 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.495 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.754 20:10:42 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.754 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.691 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.691 "name": "raid_bdev1", 00:17:42.691 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:42.691 "strip_size_kb": 64, 00:17:42.691 "state": "online", 00:17:42.691 "raid_level": "raid5f", 00:17:42.691 "superblock": true, 00:17:42.691 "num_base_bdevs": 3, 00:17:42.691 "num_base_bdevs_discovered": 3, 00:17:42.691 "num_base_bdevs_operational": 3, 00:17:42.691 "process": { 00:17:42.691 "type": "rebuild", 00:17:42.691 "target": "spare", 00:17:42.691 "progress": { 00:17:42.691 "blocks": 114688, 
00:17:42.691 "percent": 90 00:17:42.691 } 00:17:42.691 }, 00:17:42.691 "base_bdevs_list": [ 00:17:42.691 { 00:17:42.691 "name": "spare", 00:17:42.691 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:42.691 "is_configured": true, 00:17:42.691 "data_offset": 2048, 00:17:42.691 "data_size": 63488 00:17:42.691 }, 00:17:42.691 { 00:17:42.691 "name": "BaseBdev2", 00:17:42.691 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:42.691 "is_configured": true, 00:17:42.691 "data_offset": 2048, 00:17:42.691 "data_size": 63488 00:17:42.691 }, 00:17:42.691 { 00:17:42.691 "name": "BaseBdev3", 00:17:42.691 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:42.691 "is_configured": true, 00:17:42.691 "data_offset": 2048, 00:17:42.691 "data_size": 63488 00:17:42.691 } 00:17:42.691 ] 00:17:42.691 }' 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.691 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.267 [2024-12-05 20:10:44.527894] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:43.267 [2024-12-05 20:10:44.527969] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:43.267 [2024-12-05 20:10:44.528072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.834 
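The polling loop above repeatedly fetches `bdev_raid_get_bdevs all` and applies the jq filters `'.process.type // "none"'` and `'.process.target // "none"'` to decide whether the rebuild is still running against the `spare` target. A minimal Python sketch of that same check, parsing a JSON record shaped like the `raid_bdev_info` blobs in this log (not part of the SPDK test suite):

```python
import json

# Sample record with the same shape as the raid_bdev_info output above;
# in a live run this JSON would come from `rpc.py bdev_raid_get_bdevs all`.
raid_bdev_info = json.loads('''
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 114688, "percent": 90 }
  }
}
''')

# Mirror of the jq filters '.process.type // "none"' and
# '.process.target // "none"': fall back to "none" when the "process"
# key is absent (i.e. when the rebuild has finished).
process = raid_bdev_info.get("process") or {}
process_type = process.get("type", "none")
target = process.get("target", "none")

# The shell loop keeps sleeping while both values still match.
assert process_type == "rebuild" and target == "spare"
print(process_type, target, process["progress"]["percent"])
```

Once the rebuild completes, the `process` key disappears from the RPC output, both filters yield `"none"`, and the loop's `break` at `bdev_bdev_raid.sh@709` fires, as seen a few lines below.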
20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.834 "name": "raid_bdev1", 00:17:43.834 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:43.834 "strip_size_kb": 64, 00:17:43.834 "state": "online", 00:17:43.834 "raid_level": "raid5f", 00:17:43.834 "superblock": true, 00:17:43.834 "num_base_bdevs": 3, 00:17:43.834 "num_base_bdevs_discovered": 3, 00:17:43.834 "num_base_bdevs_operational": 3, 00:17:43.834 "base_bdevs_list": [ 00:17:43.834 { 00:17:43.834 "name": "spare", 00:17:43.834 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:43.834 "is_configured": true, 00:17:43.834 "data_offset": 2048, 00:17:43.834 "data_size": 63488 00:17:43.834 }, 00:17:43.834 { 00:17:43.834 "name": "BaseBdev2", 00:17:43.834 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:43.834 "is_configured": true, 00:17:43.834 "data_offset": 2048, 00:17:43.834 "data_size": 63488 00:17:43.834 }, 00:17:43.834 { 00:17:43.834 "name": "BaseBdev3", 00:17:43.834 
"uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:43.834 "is_configured": true, 00:17:43.834 "data_offset": 2048, 00:17:43.834 "data_size": 63488 00:17:43.834 } 00:17:43.834 ] 00:17:43.834 }' 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.834 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.092 "name": 
"raid_bdev1", 00:17:44.092 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:44.092 "strip_size_kb": 64, 00:17:44.092 "state": "online", 00:17:44.092 "raid_level": "raid5f", 00:17:44.092 "superblock": true, 00:17:44.092 "num_base_bdevs": 3, 00:17:44.092 "num_base_bdevs_discovered": 3, 00:17:44.092 "num_base_bdevs_operational": 3, 00:17:44.092 "base_bdevs_list": [ 00:17:44.092 { 00:17:44.092 "name": "spare", 00:17:44.092 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.092 }, 00:17:44.092 { 00:17:44.092 "name": "BaseBdev2", 00:17:44.092 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.092 }, 00:17:44.092 { 00:17:44.092 "name": "BaseBdev3", 00:17:44.092 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.092 } 00:17:44.092 ] 00:17:44.092 }' 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.092 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.092 "name": "raid_bdev1", 00:17:44.092 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:44.092 "strip_size_kb": 64, 00:17:44.092 "state": "online", 00:17:44.092 "raid_level": "raid5f", 00:17:44.092 "superblock": true, 00:17:44.092 "num_base_bdevs": 3, 00:17:44.092 "num_base_bdevs_discovered": 3, 00:17:44.092 "num_base_bdevs_operational": 3, 00:17:44.092 "base_bdevs_list": [ 00:17:44.092 { 00:17:44.092 "name": "spare", 00:17:44.092 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.092 }, 00:17:44.092 { 00:17:44.092 "name": "BaseBdev2", 
00:17:44.092 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.092 }, 00:17:44.092 { 00:17:44.092 "name": "BaseBdev3", 00:17:44.092 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:44.092 "is_configured": true, 00:17:44.092 "data_offset": 2048, 00:17:44.092 "data_size": 63488 00:17:44.093 } 00:17:44.093 ] 00:17:44.093 }' 00:17:44.093 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.093 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.659 [2024-12-05 20:10:45.836071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.659 [2024-12-05 20:10:45.836103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.659 [2024-12-05 20:10:45.836184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.659 [2024-12-05 20:10:45.836266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.659 [2024-12-05 20:10:45.836302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.659 20:10:45 
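After `bdev_raid_delete raid_bdev1` succeeds, the test re-runs `bdev_raid_get_bdevs all` and asserts the array is empty via `jq length` (the `[[ 0 == 0 ]]` check in the log). A small stand-alone sketch of that assertion, using hypothetical simulated RPC responses rather than a live SPDK target:

```python
import json

# Simulated `bdev_raid_get_bdevs all` responses (illustrative values only):
# one raid bdev before deletion, an empty list afterwards.
before_delete = json.loads('[{"name": "raid_bdev1", "state": "online"}]')
after_delete = json.loads('[]')

assert len(before_delete) == 1
assert len(after_delete) == 0  # mirrors `jq length` == 0 in the log
print(len(after_delete))
```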
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.659 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:44.659 /dev/nbd0 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.918 1+0 records in 00:17:44.918 1+0 records out 00:17:44.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316609 s, 12.9 MB/s 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:17:44.918 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:44.918 /dev/nbd1 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.177 1+0 records in 00:17:45.177 1+0 records out 00:17:45.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417765 s, 9.8 MB/s 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.177 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.445 20:10:46 
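The nbd stage exports `BaseBdev1` and `spare` as `/dev/nbd0` and `/dev/nbd1`, waits for each to appear in `/proc/partitions`, then runs `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: compare byte-for-byte after skipping the first 1 MiB on each device. That offset matches the `data_offset` of 2048 blocks times the 512-byte block length reported earlier in the log, i.e. the region holding raid metadata, which is expected to differ. A sketch of what that offset comparison verifies, with in-memory stand-ins instead of real nbd devices:

```python
import io

# Skip length used by `cmp -i`: 2048 blocks * 512 bytes = 1 MiB.
OFFSET = 2048 * 512

# Stand-ins for /dev/nbd0 and /dev/nbd1: metadata headers differ,
# data past the offset is identical.
dev0 = io.BytesIO(b"\x00" * OFFSET + b"payload" * 3)
dev1 = io.BytesIO(b"\xff" * OFFSET + b"payload" * 3)

dev0.seek(OFFSET)
dev1.seek(OFFSET)
assert dev0.read() == dev1.read()  # bytes past the skip region match
print("match")
```

A zero exit status from `cmp` (no output in the log) means the rebuilt spare carries the same data as the original base bdev, after which the test tears the nbd exports back down with `nbd_stop_disk`.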
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.445 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 [2024-12-05 20:10:46.989227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.749 [2024-12-05 20:10:46.989292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.749 [2024-12-05 20:10:46.989315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:45.749 [2024-12-05 20:10:46.989325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.749 [2024-12-05 20:10:46.991571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.749 [2024-12-05 20:10:46.991608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.749 [2024-12-05 20:10:46.991696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:45.749 [2024-12-05 20:10:46.991745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.749 [2024-12-05 20:10:46.991883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.749 [2024-12-05 20:10:46.992001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.749 spare 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.749 20:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 [2024-12-05 20:10:47.091918] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:45.749 [2024-12-05 20:10:47.091949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:45.749 [2024-12-05 20:10:47.092224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:45.749 [2024-12-05 20:10:47.097531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:45.749 [2024-12-05 20:10:47.097553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:45.749 [2024-12-05 20:10:47.097742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.749 20:10:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.749 "name": "raid_bdev1", 00:17:45.749 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:45.749 "strip_size_kb": 64, 00:17:45.749 "state": "online", 00:17:45.749 "raid_level": "raid5f", 00:17:45.749 "superblock": true, 00:17:45.749 "num_base_bdevs": 3, 00:17:45.749 "num_base_bdevs_discovered": 3, 00:17:45.749 "num_base_bdevs_operational": 3, 00:17:45.749 "base_bdevs_list": [ 00:17:45.749 { 00:17:45.749 "name": "spare", 00:17:45.749 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:45.749 "is_configured": true, 00:17:45.749 "data_offset": 2048, 00:17:45.749 "data_size": 63488 00:17:45.749 }, 00:17:45.749 { 00:17:45.749 "name": "BaseBdev2", 00:17:45.749 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:45.749 "is_configured": true, 00:17:45.749 "data_offset": 2048, 00:17:45.749 "data_size": 63488 00:17:45.749 }, 00:17:45.749 { 00:17:45.749 "name": "BaseBdev3", 00:17:45.749 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:45.749 "is_configured": true, 00:17:45.749 "data_offset": 2048, 00:17:45.749 "data_size": 63488 00:17:45.749 } 00:17:45.749 ] 00:17:45.749 }' 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.749 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.342 "name": "raid_bdev1", 00:17:46.342 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:46.342 "strip_size_kb": 64, 00:17:46.342 "state": "online", 00:17:46.342 "raid_level": "raid5f", 00:17:46.342 "superblock": true, 00:17:46.342 "num_base_bdevs": 3, 00:17:46.342 "num_base_bdevs_discovered": 3, 00:17:46.342 "num_base_bdevs_operational": 3, 00:17:46.342 "base_bdevs_list": [ 00:17:46.342 { 00:17:46.342 "name": "spare", 00:17:46.342 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 2048, 00:17:46.342 "data_size": 63488 00:17:46.342 }, 00:17:46.342 { 00:17:46.342 "name": "BaseBdev2", 00:17:46.342 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 2048, 00:17:46.342 "data_size": 63488 
00:17:46.342 }, 00:17:46.342 { 00:17:46.342 "name": "BaseBdev3", 00:17:46.342 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 2048, 00:17:46.342 "data_size": 63488 00:17:46.342 } 00:17:46.342 ] 00:17:46.342 }' 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.342 [2024-12-05 20:10:47.746817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.342 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.343 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.343 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.343 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.602 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.602 "name": "raid_bdev1", 00:17:46.602 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:46.602 "strip_size_kb": 64, 00:17:46.602 "state": "online", 00:17:46.602 "raid_level": "raid5f", 00:17:46.602 "superblock": true, 00:17:46.602 "num_base_bdevs": 3, 
00:17:46.602 "num_base_bdevs_discovered": 2, 00:17:46.602 "num_base_bdevs_operational": 2, 00:17:46.602 "base_bdevs_list": [ 00:17:46.602 { 00:17:46.602 "name": null, 00:17:46.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.602 "is_configured": false, 00:17:46.602 "data_offset": 0, 00:17:46.602 "data_size": 63488 00:17:46.602 }, 00:17:46.602 { 00:17:46.602 "name": "BaseBdev2", 00:17:46.602 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:46.602 "is_configured": true, 00:17:46.602 "data_offset": 2048, 00:17:46.602 "data_size": 63488 00:17:46.602 }, 00:17:46.602 { 00:17:46.602 "name": "BaseBdev3", 00:17:46.602 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:46.602 "is_configured": true, 00:17:46.602 "data_offset": 2048, 00:17:46.602 "data_size": 63488 00:17:46.602 } 00:17:46.602 ] 00:17:46.602 }' 00:17:46.602 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.602 20:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.862 20:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.862 20:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.862 20:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.862 [2024-12-05 20:10:48.218030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.862 [2024-12-05 20:10:48.218216] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.862 [2024-12-05 20:10:48.218240] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:46.862 [2024-12-05 20:10:48.218272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.862 [2024-12-05 20:10:48.233664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:46.862 20:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.862 20:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:46.862 [2024-12-05 20:10:48.240759] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.239 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.239 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.239 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.240 "name": "raid_bdev1", 00:17:48.240 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:48.240 "strip_size_kb": 64, 00:17:48.240 "state": "online", 00:17:48.240 
"raid_level": "raid5f", 00:17:48.240 "superblock": true, 00:17:48.240 "num_base_bdevs": 3, 00:17:48.240 "num_base_bdevs_discovered": 3, 00:17:48.240 "num_base_bdevs_operational": 3, 00:17:48.240 "process": { 00:17:48.240 "type": "rebuild", 00:17:48.240 "target": "spare", 00:17:48.240 "progress": { 00:17:48.240 "blocks": 20480, 00:17:48.240 "percent": 16 00:17:48.240 } 00:17:48.240 }, 00:17:48.240 "base_bdevs_list": [ 00:17:48.240 { 00:17:48.240 "name": "spare", 00:17:48.240 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev2", 00:17:48.240 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev3", 00:17:48.240 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 } 00:17:48.240 ] 00:17:48.240 }' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.240 [2024-12-05 20:10:49.383900] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.240 [2024-12-05 20:10:49.448772] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.240 [2024-12-05 20:10:49.448842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.240 [2024-12-05 20:10:49.448860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.240 [2024-12-05 20:10:49.448871] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.240 "name": "raid_bdev1", 00:17:48.240 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:48.240 "strip_size_kb": 64, 00:17:48.240 "state": "online", 00:17:48.240 "raid_level": "raid5f", 00:17:48.240 "superblock": true, 00:17:48.240 "num_base_bdevs": 3, 00:17:48.240 "num_base_bdevs_discovered": 2, 00:17:48.240 "num_base_bdevs_operational": 2, 00:17:48.240 "base_bdevs_list": [ 00:17:48.240 { 00:17:48.240 "name": null, 00:17:48.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.240 "is_configured": false, 00:17:48.240 "data_offset": 0, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev2", 00:17:48.240 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev3", 00:17:48.240 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 } 00:17:48.240 ] 00:17:48.240 }' 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.240 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.499 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.499 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.499 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.499 [2024-12-05 20:10:49.900746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.499 [2024-12-05 20:10:49.900820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.499 [2024-12-05 20:10:49.900840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:48.499 [2024-12-05 20:10:49.900852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.499 [2024-12-05 20:10:49.901417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.499 [2024-12-05 20:10:49.901450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.499 [2024-12-05 20:10:49.901548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:48.499 [2024-12-05 20:10:49.901575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.499 [2024-12-05 20:10:49.901587] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:48.499 [2024-12-05 20:10:49.901612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.499 [2024-12-05 20:10:49.916952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:48.499 spare 00:17:48.499 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.499 20:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:48.499 [2024-12-05 20:10:49.923983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.879 "name": "raid_bdev1", 00:17:49.879 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:49.879 "strip_size_kb": 64, 00:17:49.879 "state": 
"online", 00:17:49.879 "raid_level": "raid5f", 00:17:49.879 "superblock": true, 00:17:49.879 "num_base_bdevs": 3, 00:17:49.879 "num_base_bdevs_discovered": 3, 00:17:49.879 "num_base_bdevs_operational": 3, 00:17:49.879 "process": { 00:17:49.879 "type": "rebuild", 00:17:49.879 "target": "spare", 00:17:49.879 "progress": { 00:17:49.879 "blocks": 20480, 00:17:49.879 "percent": 16 00:17:49.879 } 00:17:49.879 }, 00:17:49.879 "base_bdevs_list": [ 00:17:49.879 { 00:17:49.879 "name": "spare", 00:17:49.879 "uuid": "df213266-abf2-5176-8708-7b97ed9bed64", 00:17:49.879 "is_configured": true, 00:17:49.879 "data_offset": 2048, 00:17:49.879 "data_size": 63488 00:17:49.879 }, 00:17:49.879 { 00:17:49.879 "name": "BaseBdev2", 00:17:49.879 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:49.879 "is_configured": true, 00:17:49.879 "data_offset": 2048, 00:17:49.879 "data_size": 63488 00:17:49.879 }, 00:17:49.879 { 00:17:49.879 "name": "BaseBdev3", 00:17:49.879 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:49.879 "is_configured": true, 00:17:49.879 "data_offset": 2048, 00:17:49.879 "data_size": 63488 00:17:49.879 } 00:17:49.879 ] 00:17:49.879 }' 00:17:49.879 20:10:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.879 [2024-12-05 20:10:51.082969] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.879 [2024-12-05 20:10:51.131595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.879 [2024-12-05 20:10:51.131643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.879 [2024-12-05 20:10:51.131676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.879 [2024-12-05 20:10:51.131683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.879 "name": "raid_bdev1", 00:17:49.879 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:49.879 "strip_size_kb": 64, 00:17:49.879 "state": "online", 00:17:49.879 "raid_level": "raid5f", 00:17:49.879 "superblock": true, 00:17:49.879 "num_base_bdevs": 3, 00:17:49.879 "num_base_bdevs_discovered": 2, 00:17:49.879 "num_base_bdevs_operational": 2, 00:17:49.879 "base_bdevs_list": [ 00:17:49.879 { 00:17:49.879 "name": null, 00:17:49.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.879 "is_configured": false, 00:17:49.879 "data_offset": 0, 00:17:49.879 "data_size": 63488 00:17:49.879 }, 00:17:49.879 { 00:17:49.879 "name": "BaseBdev2", 00:17:49.879 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:49.879 "is_configured": true, 00:17:49.879 "data_offset": 2048, 00:17:49.879 "data_size": 63488 00:17:49.879 }, 00:17:49.879 { 00:17:49.879 "name": "BaseBdev3", 00:17:49.879 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:49.879 "is_configured": true, 00:17:49.879 "data_offset": 2048, 00:17:49.879 "data_size": 63488 00:17:49.879 } 00:17:49.879 ] 00:17:49.879 }' 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.879 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.397 "name": "raid_bdev1", 00:17:50.397 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:50.397 "strip_size_kb": 64, 00:17:50.397 "state": "online", 00:17:50.397 "raid_level": "raid5f", 00:17:50.397 "superblock": true, 00:17:50.397 "num_base_bdevs": 3, 00:17:50.397 "num_base_bdevs_discovered": 2, 00:17:50.397 "num_base_bdevs_operational": 2, 00:17:50.397 "base_bdevs_list": [ 00:17:50.397 { 00:17:50.397 "name": null, 00:17:50.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.397 "is_configured": false, 00:17:50.397 "data_offset": 0, 00:17:50.397 "data_size": 63488 00:17:50.397 }, 00:17:50.397 { 00:17:50.397 "name": "BaseBdev2", 00:17:50.397 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:50.397 "is_configured": true, 00:17:50.397 "data_offset": 2048, 00:17:50.397 "data_size": 63488 00:17:50.397 }, 00:17:50.397 { 00:17:50.397 "name": "BaseBdev3", 00:17:50.397 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:50.397 "is_configured": true, 
00:17:50.397 "data_offset": 2048, 00:17:50.397 "data_size": 63488 00:17:50.397 } 00:17:50.397 ] 00:17:50.397 }' 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.397 [2024-12-05 20:10:51.684552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:50.397 [2024-12-05 20:10:51.684607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.397 [2024-12-05 20:10:51.684639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:50.397 [2024-12-05 20:10:51.684648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.397 [2024-12-05 20:10:51.685162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.397 [2024-12-05 
20:10:51.685192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:50.397 [2024-12-05 20:10:51.685281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:50.397 [2024-12-05 20:10:51.685307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.397 [2024-12-05 20:10:51.685329] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.397 [2024-12-05 20:10:51.685340] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:50.397 BaseBdev1 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.397 20:10:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.335 20:10:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.335 "name": "raid_bdev1", 00:17:51.335 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:51.335 "strip_size_kb": 64, 00:17:51.335 "state": "online", 00:17:51.335 "raid_level": "raid5f", 00:17:51.335 "superblock": true, 00:17:51.335 "num_base_bdevs": 3, 00:17:51.335 "num_base_bdevs_discovered": 2, 00:17:51.335 "num_base_bdevs_operational": 2, 00:17:51.335 "base_bdevs_list": [ 00:17:51.335 { 00:17:51.335 "name": null, 00:17:51.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.335 "is_configured": false, 00:17:51.335 "data_offset": 0, 00:17:51.335 "data_size": 63488 00:17:51.335 }, 00:17:51.335 { 00:17:51.335 "name": "BaseBdev2", 00:17:51.335 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:51.335 "is_configured": true, 00:17:51.335 "data_offset": 2048, 00:17:51.335 "data_size": 63488 00:17:51.335 }, 00:17:51.335 { 00:17:51.335 "name": "BaseBdev3", 00:17:51.335 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:51.335 "is_configured": true, 00:17:51.335 "data_offset": 2048, 00:17:51.335 "data_size": 63488 00:17:51.335 } 00:17:51.335 ] 00:17:51.335 }' 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.335 20:10:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.905 "name": "raid_bdev1", 00:17:51.905 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:51.905 "strip_size_kb": 64, 00:17:51.905 "state": "online", 00:17:51.905 "raid_level": "raid5f", 00:17:51.905 "superblock": true, 00:17:51.905 "num_base_bdevs": 3, 00:17:51.905 "num_base_bdevs_discovered": 2, 00:17:51.905 "num_base_bdevs_operational": 2, 00:17:51.905 "base_bdevs_list": [ 00:17:51.905 { 00:17:51.905 "name": null, 00:17:51.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.905 "is_configured": false, 00:17:51.905 "data_offset": 0, 00:17:51.905 "data_size": 63488 00:17:51.905 }, 00:17:51.905 { 00:17:51.905 "name": "BaseBdev2", 00:17:51.905 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 
00:17:51.905 "is_configured": true, 00:17:51.905 "data_offset": 2048, 00:17:51.905 "data_size": 63488 00:17:51.905 }, 00:17:51.905 { 00:17:51.905 "name": "BaseBdev3", 00:17:51.905 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:51.905 "is_configured": true, 00:17:51.905 "data_offset": 2048, 00:17:51.905 "data_size": 63488 00:17:51.905 } 00:17:51.905 ] 00:17:51.905 }' 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.905 20:10:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.905 [2024-12-05 20:10:53.281876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.905 [2024-12-05 20:10:53.282069] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.905 [2024-12-05 20:10:53.282084] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:51.905 request: 00:17:51.905 { 00:17:51.905 "base_bdev": "BaseBdev1", 00:17:51.905 "raid_bdev": "raid_bdev1", 00:17:51.905 "method": "bdev_raid_add_base_bdev", 00:17:51.905 "req_id": 1 00:17:51.905 } 00:17:51.905 Got JSON-RPC error response 00:17:51.905 response: 00:17:51.905 { 00:17:51.905 "code": -22, 00:17:51.905 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:51.905 } 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.905 20:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.284 "name": "raid_bdev1", 00:17:53.284 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:53.284 "strip_size_kb": 64, 00:17:53.284 "state": "online", 00:17:53.284 "raid_level": "raid5f", 00:17:53.284 "superblock": true, 00:17:53.284 "num_base_bdevs": 3, 00:17:53.284 "num_base_bdevs_discovered": 2, 00:17:53.284 "num_base_bdevs_operational": 2, 00:17:53.284 "base_bdevs_list": [ 00:17:53.284 { 00:17:53.284 "name": null, 00:17:53.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.284 "is_configured": false, 00:17:53.284 "data_offset": 0, 00:17:53.284 "data_size": 63488 00:17:53.284 }, 00:17:53.284 { 00:17:53.284 
"name": "BaseBdev2", 00:17:53.284 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:53.284 "is_configured": true, 00:17:53.284 "data_offset": 2048, 00:17:53.284 "data_size": 63488 00:17:53.284 }, 00:17:53.284 { 00:17:53.284 "name": "BaseBdev3", 00:17:53.284 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:53.284 "is_configured": true, 00:17:53.284 "data_offset": 2048, 00:17:53.284 "data_size": 63488 00:17:53.284 } 00:17:53.284 ] 00:17:53.284 }' 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.284 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.542 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.542 "name": "raid_bdev1", 00:17:53.542 "uuid": "3808e8bb-f17e-4d3a-941d-3eb730bda73b", 00:17:53.542 
"strip_size_kb": 64, 00:17:53.542 "state": "online", 00:17:53.542 "raid_level": "raid5f", 00:17:53.542 "superblock": true, 00:17:53.542 "num_base_bdevs": 3, 00:17:53.542 "num_base_bdevs_discovered": 2, 00:17:53.542 "num_base_bdevs_operational": 2, 00:17:53.542 "base_bdevs_list": [ 00:17:53.542 { 00:17:53.542 "name": null, 00:17:53.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.542 "is_configured": false, 00:17:53.543 "data_offset": 0, 00:17:53.543 "data_size": 63488 00:17:53.543 }, 00:17:53.543 { 00:17:53.543 "name": "BaseBdev2", 00:17:53.543 "uuid": "0062d471-b049-535a-b778-c9b99f718a92", 00:17:53.543 "is_configured": true, 00:17:53.543 "data_offset": 2048, 00:17:53.543 "data_size": 63488 00:17:53.543 }, 00:17:53.543 { 00:17:53.543 "name": "BaseBdev3", 00:17:53.543 "uuid": "8c4c3b91-25ee-58ac-88e8-809b9c3449ea", 00:17:53.543 "is_configured": true, 00:17:53.543 "data_offset": 2048, 00:17:53.543 "data_size": 63488 00:17:53.543 } 00:17:53.543 ] 00:17:53.543 }' 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82099 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82099 ']' 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82099 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.543 20:10:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82099 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82099' 00:17:53.543 killing process with pid 82099 00:17:53.543 Received shutdown signal, test time was about 60.000000 seconds 00:17:53.543 00:17:53.543 Latency(us) 00:17:53.543 [2024-12-05T20:10:54.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.543 [2024-12-05T20:10:54.980Z] =================================================================================================================== 00:17:53.543 [2024-12-05T20:10:54.980Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82099 00:17:53.543 [2024-12-05 20:10:54.894988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.543 [2024-12-05 20:10:54.895116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.543 20:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82099 00:17:53.543 [2024-12-05 20:10:54.895190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.543 [2024-12-05 20:10:54.895204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:54.109 [2024-12-05 20:10:55.271306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.047 20:10:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:55.047 00:17:55.047 real 0m22.952s 00:17:55.047 user 0m29.381s 
00:17:55.047 sys 0m2.680s 00:17:55.047 ************************************ 00:17:55.047 END TEST raid5f_rebuild_test_sb 00:17:55.047 ************************************ 00:17:55.047 20:10:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.047 20:10:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.047 20:10:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:55.047 20:10:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:55.047 20:10:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:55.047 20:10:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.047 20:10:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.047 ************************************ 00:17:55.047 START TEST raid5f_state_function_test 00:17:55.047 ************************************ 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82848 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82848' 00:17:55.047 Process raid pid: 82848 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82848 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82848 ']' 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.047 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.306 [2024-12-05 20:10:56.491018] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:17:55.306 [2024-12-05 20:10:56.491214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.306 [2024-12-05 20:10:56.664168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.566 [2024-12-05 20:10:56.769958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.566 [2024-12-05 20:10:56.952690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.566 [2024-12-05 20:10:56.952804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.134 [2024-12-05 20:10:57.313240] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.134 [2024-12-05 20:10:57.313296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.134 [2024-12-05 20:10:57.313306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.134 [2024-12-05 20:10:57.313316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.134 [2024-12-05 20:10:57.313323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:56.134 [2024-12-05 20:10:57.313331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.134 [2024-12-05 20:10:57.313337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.134 [2024-12-05 20:10:57.313346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.134 20:10:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.134 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.134 "name": "Existed_Raid", 00:17:56.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "strip_size_kb": 64, 00:17:56.135 "state": "configuring", 00:17:56.135 "raid_level": "raid5f", 00:17:56.135 "superblock": false, 00:17:56.135 "num_base_bdevs": 4, 00:17:56.135 "num_base_bdevs_discovered": 0, 00:17:56.135 "num_base_bdevs_operational": 4, 00:17:56.135 "base_bdevs_list": [ 00:17:56.135 { 00:17:56.135 "name": "BaseBdev1", 00:17:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "is_configured": false, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 0 00:17:56.135 }, 00:17:56.135 { 00:17:56.135 "name": "BaseBdev2", 00:17:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "is_configured": false, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 0 00:17:56.135 }, 00:17:56.135 { 00:17:56.135 "name": "BaseBdev3", 00:17:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "is_configured": false, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 0 00:17:56.135 }, 00:17:56.135 { 00:17:56.135 "name": "BaseBdev4", 00:17:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "is_configured": false, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 0 00:17:56.135 } 00:17:56.135 ] 00:17:56.135 }' 00:17:56.135 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.135 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.395 [2024-12-05 20:10:57.712498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.395 [2024-12-05 20:10:57.712534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.395 [2024-12-05 20:10:57.720495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.395 [2024-12-05 20:10:57.720584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.395 [2024-12-05 20:10:57.720612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.395 [2024-12-05 20:10:57.720642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.395 [2024-12-05 20:10:57.720660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.395 [2024-12-05 20:10:57.720680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.395 [2024-12-05 20:10:57.720698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:56.395 [2024-12-05 20:10:57.720718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.395 [2024-12-05 20:10:57.762990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.395 BaseBdev1 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.395 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.396 
20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.396 [ 00:17:56.396 { 00:17:56.396 "name": "BaseBdev1", 00:17:56.396 "aliases": [ 00:17:56.396 "4800d241-59c6-4feb-aa89-b6aefce9f7e2" 00:17:56.396 ], 00:17:56.396 "product_name": "Malloc disk", 00:17:56.396 "block_size": 512, 00:17:56.396 "num_blocks": 65536, 00:17:56.396 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:56.396 "assigned_rate_limits": { 00:17:56.396 "rw_ios_per_sec": 0, 00:17:56.396 "rw_mbytes_per_sec": 0, 00:17:56.396 "r_mbytes_per_sec": 0, 00:17:56.396 "w_mbytes_per_sec": 0 00:17:56.396 }, 00:17:56.396 "claimed": true, 00:17:56.396 "claim_type": "exclusive_write", 00:17:56.396 "zoned": false, 00:17:56.396 "supported_io_types": { 00:17:56.396 "read": true, 00:17:56.396 "write": true, 00:17:56.396 "unmap": true, 00:17:56.396 "flush": true, 00:17:56.396 "reset": true, 00:17:56.396 "nvme_admin": false, 00:17:56.396 "nvme_io": false, 00:17:56.396 "nvme_io_md": false, 00:17:56.396 "write_zeroes": true, 00:17:56.396 "zcopy": true, 00:17:56.396 "get_zone_info": false, 00:17:56.396 "zone_management": false, 00:17:56.396 "zone_append": false, 00:17:56.396 "compare": false, 00:17:56.396 "compare_and_write": false, 00:17:56.396 "abort": true, 00:17:56.396 "seek_hole": false, 00:17:56.396 "seek_data": false, 00:17:56.396 "copy": true, 00:17:56.396 "nvme_iov_md": false 00:17:56.396 }, 00:17:56.396 "memory_domains": [ 00:17:56.396 { 00:17:56.396 "dma_device_id": "system", 00:17:56.396 "dma_device_type": 1 00:17:56.396 }, 00:17:56.396 { 00:17:56.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.396 "dma_device_type": 2 00:17:56.396 } 00:17:56.396 ], 00:17:56.396 "driver_specific": {} 00:17:56.396 } 
00:17:56.396 ] 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.396 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:56.656 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.656 "name": "Existed_Raid", 00:17:56.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.656 "strip_size_kb": 64, 00:17:56.656 "state": "configuring", 00:17:56.656 "raid_level": "raid5f", 00:17:56.656 "superblock": false, 00:17:56.656 "num_base_bdevs": 4, 00:17:56.656 "num_base_bdevs_discovered": 1, 00:17:56.656 "num_base_bdevs_operational": 4, 00:17:56.656 "base_bdevs_list": [ 00:17:56.656 { 00:17:56.656 "name": "BaseBdev1", 00:17:56.656 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:56.656 "is_configured": true, 00:17:56.656 "data_offset": 0, 00:17:56.656 "data_size": 65536 00:17:56.656 }, 00:17:56.656 { 00:17:56.656 "name": "BaseBdev2", 00:17:56.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.656 "is_configured": false, 00:17:56.656 "data_offset": 0, 00:17:56.656 "data_size": 0 00:17:56.656 }, 00:17:56.656 { 00:17:56.656 "name": "BaseBdev3", 00:17:56.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.656 "is_configured": false, 00:17:56.656 "data_offset": 0, 00:17:56.656 "data_size": 0 00:17:56.656 }, 00:17:56.656 { 00:17:56.656 "name": "BaseBdev4", 00:17:56.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.656 "is_configured": false, 00:17:56.656 "data_offset": 0, 00:17:56.656 "data_size": 0 00:17:56.656 } 00:17:56.656 ] 00:17:56.656 }' 00:17:56.656 20:10:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.656 20:10:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.915 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.915 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.915 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.915 
[2024-12-05 20:10:58.226253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.915 [2024-12-05 20:10:58.226357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.916 [2024-12-05 20:10:58.234294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.916 [2024-12-05 20:10:58.236003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.916 [2024-12-05 20:10:58.236037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.916 [2024-12-05 20:10:58.236046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.916 [2024-12-05 20:10:58.236056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.916 [2024-12-05 20:10:58.236063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.916 [2024-12-05 20:10:58.236071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.916 "name": "Existed_Raid", 00:17:56.916 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:56.916 "strip_size_kb": 64, 00:17:56.916 "state": "configuring", 00:17:56.916 "raid_level": "raid5f", 00:17:56.916 "superblock": false, 00:17:56.916 "num_base_bdevs": 4, 00:17:56.916 "num_base_bdevs_discovered": 1, 00:17:56.916 "num_base_bdevs_operational": 4, 00:17:56.916 "base_bdevs_list": [ 00:17:56.916 { 00:17:56.916 "name": "BaseBdev1", 00:17:56.916 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:56.916 "is_configured": true, 00:17:56.916 "data_offset": 0, 00:17:56.916 "data_size": 65536 00:17:56.916 }, 00:17:56.916 { 00:17:56.916 "name": "BaseBdev2", 00:17:56.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.916 "is_configured": false, 00:17:56.916 "data_offset": 0, 00:17:56.916 "data_size": 0 00:17:56.916 }, 00:17:56.916 { 00:17:56.916 "name": "BaseBdev3", 00:17:56.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.916 "is_configured": false, 00:17:56.916 "data_offset": 0, 00:17:56.916 "data_size": 0 00:17:56.916 }, 00:17:56.916 { 00:17:56.916 "name": "BaseBdev4", 00:17:56.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.916 "is_configured": false, 00:17:56.916 "data_offset": 0, 00:17:56.916 "data_size": 0 00:17:56.916 } 00:17:56.916 ] 00:17:56.916 }' 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.916 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 [2024-12-05 20:10:58.698902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.487 BaseBdev2 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 [ 00:17:57.487 { 00:17:57.487 "name": "BaseBdev2", 00:17:57.487 "aliases": [ 00:17:57.487 "9bc9564b-d217-4154-8372-2b25632ba4ec" 00:17:57.487 ], 00:17:57.487 "product_name": "Malloc disk", 00:17:57.487 "block_size": 512, 00:17:57.487 "num_blocks": 65536, 00:17:57.487 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:57.487 "assigned_rate_limits": { 00:17:57.487 "rw_ios_per_sec": 0, 00:17:57.487 "rw_mbytes_per_sec": 0, 00:17:57.487 
"r_mbytes_per_sec": 0, 00:17:57.487 "w_mbytes_per_sec": 0 00:17:57.487 }, 00:17:57.487 "claimed": true, 00:17:57.487 "claim_type": "exclusive_write", 00:17:57.487 "zoned": false, 00:17:57.487 "supported_io_types": { 00:17:57.487 "read": true, 00:17:57.487 "write": true, 00:17:57.487 "unmap": true, 00:17:57.487 "flush": true, 00:17:57.487 "reset": true, 00:17:57.487 "nvme_admin": false, 00:17:57.487 "nvme_io": false, 00:17:57.487 "nvme_io_md": false, 00:17:57.487 "write_zeroes": true, 00:17:57.487 "zcopy": true, 00:17:57.487 "get_zone_info": false, 00:17:57.487 "zone_management": false, 00:17:57.487 "zone_append": false, 00:17:57.487 "compare": false, 00:17:57.487 "compare_and_write": false, 00:17:57.487 "abort": true, 00:17:57.487 "seek_hole": false, 00:17:57.487 "seek_data": false, 00:17:57.487 "copy": true, 00:17:57.487 "nvme_iov_md": false 00:17:57.487 }, 00:17:57.487 "memory_domains": [ 00:17:57.487 { 00:17:57.487 "dma_device_id": "system", 00:17:57.487 "dma_device_type": 1 00:17:57.487 }, 00:17:57.487 { 00:17:57.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.487 "dma_device_type": 2 00:17:57.487 } 00:17:57.487 ], 00:17:57.487 "driver_specific": {} 00:17:57.487 } 00:17:57.487 ] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.487 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.487 "name": "Existed_Raid", 00:17:57.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.487 "strip_size_kb": 64, 00:17:57.487 "state": "configuring", 00:17:57.487 "raid_level": "raid5f", 00:17:57.487 "superblock": false, 00:17:57.487 "num_base_bdevs": 4, 00:17:57.487 "num_base_bdevs_discovered": 2, 00:17:57.487 "num_base_bdevs_operational": 4, 00:17:57.487 "base_bdevs_list": [ 00:17:57.487 { 00:17:57.487 "name": "BaseBdev1", 00:17:57.487 "uuid": 
"4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:57.487 "is_configured": true, 00:17:57.487 "data_offset": 0, 00:17:57.487 "data_size": 65536 00:17:57.488 }, 00:17:57.488 { 00:17:57.488 "name": "BaseBdev2", 00:17:57.488 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:57.488 "is_configured": true, 00:17:57.488 "data_offset": 0, 00:17:57.488 "data_size": 65536 00:17:57.488 }, 00:17:57.488 { 00:17:57.488 "name": "BaseBdev3", 00:17:57.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.488 "is_configured": false, 00:17:57.488 "data_offset": 0, 00:17:57.488 "data_size": 0 00:17:57.488 }, 00:17:57.488 { 00:17:57.488 "name": "BaseBdev4", 00:17:57.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.488 "is_configured": false, 00:17:57.488 "data_offset": 0, 00:17:57.488 "data_size": 0 00:17:57.488 } 00:17:57.488 ] 00:17:57.488 }' 00:17:57.488 20:10:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.488 20:10:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.748 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:57.748 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.748 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.009 [2024-12-05 20:10:59.217605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.009 BaseBdev3 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.009 [ 00:17:58.009 { 00:17:58.009 "name": "BaseBdev3", 00:17:58.009 "aliases": [ 00:17:58.009 "86775ebb-3571-4385-8870-7268bbcf368f" 00:17:58.009 ], 00:17:58.009 "product_name": "Malloc disk", 00:17:58.009 "block_size": 512, 00:17:58.009 "num_blocks": 65536, 00:17:58.009 "uuid": "86775ebb-3571-4385-8870-7268bbcf368f", 00:17:58.009 "assigned_rate_limits": { 00:17:58.009 "rw_ios_per_sec": 0, 00:17:58.009 "rw_mbytes_per_sec": 0, 00:17:58.009 "r_mbytes_per_sec": 0, 00:17:58.009 "w_mbytes_per_sec": 0 00:17:58.009 }, 00:17:58.009 "claimed": true, 00:17:58.009 "claim_type": "exclusive_write", 00:17:58.009 "zoned": false, 00:17:58.009 "supported_io_types": { 00:17:58.009 "read": true, 00:17:58.009 "write": true, 00:17:58.009 "unmap": true, 00:17:58.009 "flush": true, 00:17:58.009 "reset": true, 00:17:58.009 "nvme_admin": false, 
00:17:58.009 "nvme_io": false, 00:17:58.009 "nvme_io_md": false, 00:17:58.009 "write_zeroes": true, 00:17:58.009 "zcopy": true, 00:17:58.009 "get_zone_info": false, 00:17:58.009 "zone_management": false, 00:17:58.009 "zone_append": false, 00:17:58.009 "compare": false, 00:17:58.009 "compare_and_write": false, 00:17:58.009 "abort": true, 00:17:58.009 "seek_hole": false, 00:17:58.009 "seek_data": false, 00:17:58.009 "copy": true, 00:17:58.009 "nvme_iov_md": false 00:17:58.009 }, 00:17:58.009 "memory_domains": [ 00:17:58.009 { 00:17:58.009 "dma_device_id": "system", 00:17:58.009 "dma_device_type": 1 00:17:58.009 }, 00:17:58.009 { 00:17:58.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.009 "dma_device_type": 2 00:17:58.009 } 00:17:58.009 ], 00:17:58.009 "driver_specific": {} 00:17:58.009 } 00:17:58.009 ] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.009 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.009 "name": "Existed_Raid", 00:17:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.009 "strip_size_kb": 64, 00:17:58.009 "state": "configuring", 00:17:58.009 "raid_level": "raid5f", 00:17:58.009 "superblock": false, 00:17:58.009 "num_base_bdevs": 4, 00:17:58.009 "num_base_bdevs_discovered": 3, 00:17:58.009 "num_base_bdevs_operational": 4, 00:17:58.010 "base_bdevs_list": [ 00:17:58.010 { 00:17:58.010 "name": "BaseBdev1", 00:17:58.010 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:58.010 "is_configured": true, 00:17:58.010 "data_offset": 0, 00:17:58.010 "data_size": 65536 00:17:58.010 }, 00:17:58.010 { 00:17:58.010 "name": "BaseBdev2", 00:17:58.010 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:58.010 "is_configured": true, 00:17:58.010 "data_offset": 0, 00:17:58.010 "data_size": 65536 00:17:58.010 }, 00:17:58.010 { 
00:17:58.010 "name": "BaseBdev3", 00:17:58.010 "uuid": "86775ebb-3571-4385-8870-7268bbcf368f", 00:17:58.010 "is_configured": true, 00:17:58.010 "data_offset": 0, 00:17:58.010 "data_size": 65536 00:17:58.010 }, 00:17:58.010 { 00:17:58.010 "name": "BaseBdev4", 00:17:58.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.010 "is_configured": false, 00:17:58.010 "data_offset": 0, 00:17:58.010 "data_size": 0 00:17:58.010 } 00:17:58.010 ] 00:17:58.010 }' 00:17:58.010 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.010 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.269 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:58.269 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.269 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.528 [2024-12-05 20:10:59.715359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:58.528 [2024-12-05 20:10:59.715479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.528 [2024-12-05 20:10:59.715508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:58.528 [2024-12-05 20:10:59.715807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:58.528 [2024-12-05 20:10:59.722852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.528 [2024-12-05 20:10:59.722940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:58.528 [2024-12-05 20:10:59.723245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.528 BaseBdev4 00:17:58.528 20:10:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.528 [ 00:17:58.528 { 00:17:58.528 "name": "BaseBdev4", 00:17:58.528 "aliases": [ 00:17:58.528 "153fd093-e726-430c-88d4-822d03a91b4b" 00:17:58.528 ], 00:17:58.528 "product_name": "Malloc disk", 00:17:58.528 "block_size": 512, 00:17:58.528 "num_blocks": 65536, 00:17:58.528 "uuid": "153fd093-e726-430c-88d4-822d03a91b4b", 00:17:58.528 "assigned_rate_limits": { 00:17:58.528 "rw_ios_per_sec": 0, 00:17:58.528 
"rw_mbytes_per_sec": 0, 00:17:58.528 "r_mbytes_per_sec": 0, 00:17:58.528 "w_mbytes_per_sec": 0 00:17:58.528 }, 00:17:58.528 "claimed": true, 00:17:58.528 "claim_type": "exclusive_write", 00:17:58.528 "zoned": false, 00:17:58.528 "supported_io_types": { 00:17:58.528 "read": true, 00:17:58.528 "write": true, 00:17:58.528 "unmap": true, 00:17:58.528 "flush": true, 00:17:58.528 "reset": true, 00:17:58.528 "nvme_admin": false, 00:17:58.528 "nvme_io": false, 00:17:58.528 "nvme_io_md": false, 00:17:58.528 "write_zeroes": true, 00:17:58.528 "zcopy": true, 00:17:58.528 "get_zone_info": false, 00:17:58.528 "zone_management": false, 00:17:58.528 "zone_append": false, 00:17:58.528 "compare": false, 00:17:58.528 "compare_and_write": false, 00:17:58.528 "abort": true, 00:17:58.528 "seek_hole": false, 00:17:58.528 "seek_data": false, 00:17:58.528 "copy": true, 00:17:58.528 "nvme_iov_md": false 00:17:58.528 }, 00:17:58.528 "memory_domains": [ 00:17:58.528 { 00:17:58.528 "dma_device_id": "system", 00:17:58.528 "dma_device_type": 1 00:17:58.528 }, 00:17:58.528 { 00:17:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.528 "dma_device_type": 2 00:17:58.528 } 00:17:58.528 ], 00:17:58.528 "driver_specific": {} 00:17:58.528 } 00:17:58.528 ] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.528 20:10:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.528 "name": "Existed_Raid", 00:17:58.528 "uuid": "efe39634-0636-45c1-a20a-cf3b0e403526", 00:17:58.528 "strip_size_kb": 64, 00:17:58.528 "state": "online", 00:17:58.528 "raid_level": "raid5f", 00:17:58.528 "superblock": false, 00:17:58.528 "num_base_bdevs": 4, 00:17:58.528 "num_base_bdevs_discovered": 4, 00:17:58.528 "num_base_bdevs_operational": 4, 00:17:58.528 "base_bdevs_list": [ 00:17:58.528 { 00:17:58.528 "name": 
"BaseBdev1", 00:17:58.528 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:58.528 "is_configured": true, 00:17:58.528 "data_offset": 0, 00:17:58.528 "data_size": 65536 00:17:58.528 }, 00:17:58.528 { 00:17:58.528 "name": "BaseBdev2", 00:17:58.528 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:58.528 "is_configured": true, 00:17:58.528 "data_offset": 0, 00:17:58.528 "data_size": 65536 00:17:58.528 }, 00:17:58.528 { 00:17:58.528 "name": "BaseBdev3", 00:17:58.528 "uuid": "86775ebb-3571-4385-8870-7268bbcf368f", 00:17:58.528 "is_configured": true, 00:17:58.528 "data_offset": 0, 00:17:58.528 "data_size": 65536 00:17:58.528 }, 00:17:58.528 { 00:17:58.528 "name": "BaseBdev4", 00:17:58.528 "uuid": "153fd093-e726-430c-88d4-822d03a91b4b", 00:17:58.528 "is_configured": true, 00:17:58.528 "data_offset": 0, 00:17:58.528 "data_size": 65536 00:17:58.528 } 00:17:58.528 ] 00:17:58.528 }' 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.528 20:10:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.786 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.787 [2024-12-05 20:11:00.198761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.787 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.787 "name": "Existed_Raid", 00:17:58.787 "aliases": [ 00:17:58.787 "efe39634-0636-45c1-a20a-cf3b0e403526" 00:17:58.787 ], 00:17:58.787 "product_name": "Raid Volume", 00:17:58.787 "block_size": 512, 00:17:58.787 "num_blocks": 196608, 00:17:58.787 "uuid": "efe39634-0636-45c1-a20a-cf3b0e403526", 00:17:58.787 "assigned_rate_limits": { 00:17:58.787 "rw_ios_per_sec": 0, 00:17:58.787 "rw_mbytes_per_sec": 0, 00:17:58.787 "r_mbytes_per_sec": 0, 00:17:58.787 "w_mbytes_per_sec": 0 00:17:58.787 }, 00:17:58.787 "claimed": false, 00:17:58.787 "zoned": false, 00:17:58.787 "supported_io_types": { 00:17:58.787 "read": true, 00:17:58.787 "write": true, 00:17:58.787 "unmap": false, 00:17:58.787 "flush": false, 00:17:58.787 "reset": true, 00:17:58.787 "nvme_admin": false, 00:17:58.787 "nvme_io": false, 00:17:58.787 "nvme_io_md": false, 00:17:58.787 "write_zeroes": true, 00:17:58.787 "zcopy": false, 00:17:58.787 "get_zone_info": false, 00:17:58.787 "zone_management": false, 00:17:58.787 "zone_append": false, 00:17:58.787 "compare": false, 00:17:58.787 "compare_and_write": false, 00:17:58.787 "abort": false, 00:17:58.787 "seek_hole": false, 00:17:58.787 "seek_data": false, 00:17:58.787 "copy": false, 00:17:58.787 "nvme_iov_md": false 00:17:58.787 }, 00:17:58.787 "driver_specific": { 00:17:58.787 "raid": { 00:17:58.787 "uuid": "efe39634-0636-45c1-a20a-cf3b0e403526", 00:17:58.787 "strip_size_kb": 64, 
00:17:58.787 "state": "online", 00:17:58.787 "raid_level": "raid5f", 00:17:58.787 "superblock": false, 00:17:58.787 "num_base_bdevs": 4, 00:17:58.787 "num_base_bdevs_discovered": 4, 00:17:58.787 "num_base_bdevs_operational": 4, 00:17:58.787 "base_bdevs_list": [ 00:17:58.787 { 00:17:58.787 "name": "BaseBdev1", 00:17:58.787 "uuid": "4800d241-59c6-4feb-aa89-b6aefce9f7e2", 00:17:58.787 "is_configured": true, 00:17:58.787 "data_offset": 0, 00:17:58.787 "data_size": 65536 00:17:58.787 }, 00:17:58.787 { 00:17:58.787 "name": "BaseBdev2", 00:17:58.787 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:58.787 "is_configured": true, 00:17:58.787 "data_offset": 0, 00:17:58.787 "data_size": 65536 00:17:58.787 }, 00:17:58.787 { 00:17:58.787 "name": "BaseBdev3", 00:17:58.787 "uuid": "86775ebb-3571-4385-8870-7268bbcf368f", 00:17:58.787 "is_configured": true, 00:17:58.787 "data_offset": 0, 00:17:58.787 "data_size": 65536 00:17:58.787 }, 00:17:58.787 { 00:17:58.787 "name": "BaseBdev4", 00:17:58.787 "uuid": "153fd093-e726-430c-88d4-822d03a91b4b", 00:17:58.787 "is_configured": true, 00:17:58.787 "data_offset": 0, 00:17:58.787 "data_size": 65536 00:17:58.787 } 00:17:58.787 ] 00:17:58.787 } 00:17:58.787 } 00:17:58.787 }' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:59.160 BaseBdev2 00:17:59.160 BaseBdev3 00:17:59.160 BaseBdev4' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.160 20:11:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.160 20:11:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.160 [2024-12-05 20:11:00.502114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.435 20:11:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.435 "name": "Existed_Raid", 00:17:59.435 "uuid": "efe39634-0636-45c1-a20a-cf3b0e403526", 00:17:59.435 "strip_size_kb": 64, 00:17:59.435 "state": "online", 00:17:59.435 "raid_level": "raid5f", 00:17:59.435 "superblock": false, 00:17:59.435 "num_base_bdevs": 4, 00:17:59.435 "num_base_bdevs_discovered": 3, 00:17:59.435 "num_base_bdevs_operational": 3, 00:17:59.435 "base_bdevs_list": [ 00:17:59.435 { 00:17:59.435 "name": null, 00:17:59.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.435 "is_configured": false, 00:17:59.435 "data_offset": 0, 00:17:59.435 "data_size": 65536 00:17:59.435 }, 00:17:59.435 { 00:17:59.435 "name": "BaseBdev2", 00:17:59.435 "uuid": "9bc9564b-d217-4154-8372-2b25632ba4ec", 00:17:59.435 "is_configured": true, 00:17:59.435 "data_offset": 0, 00:17:59.435 "data_size": 65536 00:17:59.435 }, 00:17:59.435 { 00:17:59.435 "name": "BaseBdev3", 00:17:59.435 "uuid": "86775ebb-3571-4385-8870-7268bbcf368f", 00:17:59.435 "is_configured": true, 00:17:59.435 "data_offset": 0, 00:17:59.435 "data_size": 65536 00:17:59.435 }, 00:17:59.435 { 00:17:59.435 "name": "BaseBdev4", 00:17:59.435 "uuid": "153fd093-e726-430c-88d4-822d03a91b4b", 00:17:59.435 "is_configured": true, 00:17:59.435 "data_offset": 0, 00:17:59.435 "data_size": 65536 00:17:59.435 } 00:17:59.435 ] 00:17:59.435 }' 00:17:59.435 
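After deleting BaseBdev1 the test calls `has_redundancy raid5f` and, since it returns 0, expects the array to stay "online" with only 3 operational base bdevs. A sketch of that decision, assuming the usual split of raid levels into redundant and non-redundant (the log only exercises raid5f; the raid0 behavior shown here is an assumption, not confirmed by this excerpt):

```python
# Levels treated as redundant here is an assumption mirroring typical
# RAID semantics; this log only demonstrates raid5f surviving a loss.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """State a raid bdev should report after losing one base bdev."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

print(expected_state_after_base_bdev_loss("raid5f"))
```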
20:11:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.435 20:11:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.695 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 [2024-12-05 20:11:01.062045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.695 [2024-12-05 20:11:01.062183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.954 [2024-12-05 20:11:01.154253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 [2024-12-05 20:11:01.214175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.954 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 [2024-12-05 20:11:01.359655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:59.954 [2024-12-05 20:11:01.359748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 20:11:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 BaseBdev2 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 [ 00:18:00.214 { 00:18:00.214 "name": "BaseBdev2", 00:18:00.214 "aliases": [ 00:18:00.214 "df010d65-781f-4600-8661-4f78f0beab6c" 00:18:00.214 ], 00:18:00.214 "product_name": "Malloc disk", 00:18:00.214 "block_size": 512, 00:18:00.214 "num_blocks": 65536, 00:18:00.214 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:00.214 "assigned_rate_limits": { 00:18:00.214 "rw_ios_per_sec": 0, 00:18:00.214 "rw_mbytes_per_sec": 0, 00:18:00.214 "r_mbytes_per_sec": 0, 00:18:00.214 "w_mbytes_per_sec": 0 00:18:00.214 }, 00:18:00.214 "claimed": false, 00:18:00.214 "zoned": false, 00:18:00.214 "supported_io_types": { 00:18:00.214 "read": true, 00:18:00.214 "write": true, 00:18:00.214 "unmap": true, 00:18:00.214 "flush": true, 00:18:00.214 "reset": true, 00:18:00.214 "nvme_admin": false, 00:18:00.214 "nvme_io": false, 00:18:00.214 "nvme_io_md": false, 00:18:00.214 "write_zeroes": true, 00:18:00.214 "zcopy": true, 00:18:00.214 "get_zone_info": false, 00:18:00.214 "zone_management": false, 00:18:00.214 "zone_append": false, 00:18:00.214 "compare": false, 00:18:00.214 "compare_and_write": false, 00:18:00.214 "abort": true, 00:18:00.214 "seek_hole": false, 00:18:00.214 "seek_data": false, 00:18:00.214 "copy": true, 00:18:00.214 "nvme_iov_md": false 00:18:00.214 }, 00:18:00.214 "memory_domains": [ 00:18:00.214 { 00:18:00.214 "dma_device_id": "system", 00:18:00.214 "dma_device_type": 1 00:18:00.214 }, 
00:18:00.214 { 00:18:00.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.214 "dma_device_type": 2 00:18:00.214 } 00:18:00.214 ], 00:18:00.214 "driver_specific": {} 00:18:00.214 } 00:18:00.214 ] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 BaseBdev3 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.214 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.214 [ 00:18:00.214 { 00:18:00.214 "name": "BaseBdev3", 00:18:00.214 "aliases": [ 00:18:00.214 "c1e594fd-2544-42d8-94bb-952437210082" 00:18:00.214 ], 00:18:00.214 "product_name": "Malloc disk", 00:18:00.214 "block_size": 512, 00:18:00.214 "num_blocks": 65536, 00:18:00.214 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:00.214 "assigned_rate_limits": { 00:18:00.214 "rw_ios_per_sec": 0, 00:18:00.214 "rw_mbytes_per_sec": 0, 00:18:00.214 "r_mbytes_per_sec": 0, 00:18:00.214 "w_mbytes_per_sec": 0 00:18:00.214 }, 00:18:00.214 "claimed": false, 00:18:00.214 "zoned": false, 00:18:00.214 "supported_io_types": { 00:18:00.214 "read": true, 00:18:00.214 "write": true, 00:18:00.214 "unmap": true, 00:18:00.214 "flush": true, 00:18:00.214 "reset": true, 00:18:00.214 "nvme_admin": false, 00:18:00.214 "nvme_io": false, 00:18:00.214 "nvme_io_md": false, 00:18:00.214 "write_zeroes": true, 00:18:00.214 "zcopy": true, 00:18:00.214 "get_zone_info": false, 00:18:00.214 "zone_management": false, 00:18:00.475 "zone_append": false, 00:18:00.475 "compare": false, 00:18:00.475 "compare_and_write": false, 00:18:00.475 "abort": true, 00:18:00.475 "seek_hole": false, 00:18:00.475 "seek_data": false, 00:18:00.475 "copy": true, 00:18:00.475 "nvme_iov_md": false 00:18:00.475 }, 00:18:00.475 "memory_domains": [ 00:18:00.475 { 00:18:00.475 "dma_device_id": "system", 00:18:00.475 
"dma_device_type": 1 00:18:00.475 }, 00:18:00.475 { 00:18:00.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.475 "dma_device_type": 2 00:18:00.475 } 00:18:00.475 ], 00:18:00.475 "driver_specific": {} 00:18:00.475 } 00:18:00.475 ] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 BaseBdev4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.475 20:11:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 [ 00:18:00.475 { 00:18:00.475 "name": "BaseBdev4", 00:18:00.475 "aliases": [ 00:18:00.475 "39fdc390-3f30-41e0-9f34-5d03b7156b9b" 00:18:00.475 ], 00:18:00.475 "product_name": "Malloc disk", 00:18:00.475 "block_size": 512, 00:18:00.475 "num_blocks": 65536, 00:18:00.475 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:00.475 "assigned_rate_limits": { 00:18:00.475 "rw_ios_per_sec": 0, 00:18:00.475 "rw_mbytes_per_sec": 0, 00:18:00.475 "r_mbytes_per_sec": 0, 00:18:00.475 "w_mbytes_per_sec": 0 00:18:00.475 }, 00:18:00.475 "claimed": false, 00:18:00.475 "zoned": false, 00:18:00.475 "supported_io_types": { 00:18:00.475 "read": true, 00:18:00.475 "write": true, 00:18:00.475 "unmap": true, 00:18:00.475 "flush": true, 00:18:00.475 "reset": true, 00:18:00.475 "nvme_admin": false, 00:18:00.475 "nvme_io": false, 00:18:00.475 "nvme_io_md": false, 00:18:00.475 "write_zeroes": true, 00:18:00.475 "zcopy": true, 00:18:00.475 "get_zone_info": false, 00:18:00.475 "zone_management": false, 00:18:00.475 "zone_append": false, 00:18:00.475 "compare": false, 00:18:00.475 "compare_and_write": false, 00:18:00.475 "abort": true, 00:18:00.475 "seek_hole": false, 00:18:00.475 "seek_data": false, 00:18:00.475 "copy": true, 00:18:00.475 "nvme_iov_md": false 00:18:00.475 }, 00:18:00.475 "memory_domains": [ 00:18:00.475 { 00:18:00.475 
"dma_device_id": "system", 00:18:00.475 "dma_device_type": 1 00:18:00.475 }, 00:18:00.475 { 00:18:00.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.475 "dma_device_type": 2 00:18:00.475 } 00:18:00.475 ], 00:18:00.475 "driver_specific": {} 00:18:00.475 } 00:18:00.475 ] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 [2024-12-05 20:11:01.746824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.475 [2024-12-05 20:11:01.746915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.475 [2024-12-05 20:11:01.746974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.475 [2024-12-05 20:11:01.748801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:00.475 [2024-12-05 20:11:01.748904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.475 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.475 "name": "Existed_Raid", 00:18:00.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.475 "strip_size_kb": 64, 00:18:00.475 "state": "configuring", 00:18:00.475 "raid_level": "raid5f", 00:18:00.475 "superblock": false, 00:18:00.475 
"num_base_bdevs": 4, 00:18:00.475 "num_base_bdevs_discovered": 3, 00:18:00.475 "num_base_bdevs_operational": 4, 00:18:00.475 "base_bdevs_list": [ 00:18:00.475 { 00:18:00.475 "name": "BaseBdev1", 00:18:00.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.475 "is_configured": false, 00:18:00.475 "data_offset": 0, 00:18:00.475 "data_size": 0 00:18:00.476 }, 00:18:00.476 { 00:18:00.476 "name": "BaseBdev2", 00:18:00.476 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:00.476 "is_configured": true, 00:18:00.476 "data_offset": 0, 00:18:00.476 "data_size": 65536 00:18:00.476 }, 00:18:00.476 { 00:18:00.476 "name": "BaseBdev3", 00:18:00.476 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:00.476 "is_configured": true, 00:18:00.476 "data_offset": 0, 00:18:00.476 "data_size": 65536 00:18:00.476 }, 00:18:00.476 { 00:18:00.476 "name": "BaseBdev4", 00:18:00.476 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:00.476 "is_configured": true, 00:18:00.476 "data_offset": 0, 00:18:00.476 "data_size": 65536 00:18:00.476 } 00:18:00.476 ] 00:18:00.476 }' 00:18:00.476 20:11:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.476 20:11:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.735 [2024-12-05 20:11:02.162105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.735 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.007 "name": "Existed_Raid", 00:18:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.007 "strip_size_kb": 64, 00:18:01.007 "state": "configuring", 00:18:01.007 "raid_level": "raid5f", 00:18:01.007 "superblock": false, 00:18:01.007 "num_base_bdevs": 4, 
00:18:01.007 "num_base_bdevs_discovered": 2, 00:18:01.007 "num_base_bdevs_operational": 4, 00:18:01.007 "base_bdevs_list": [ 00:18:01.007 { 00:18:01.007 "name": "BaseBdev1", 00:18:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.007 "is_configured": false, 00:18:01.007 "data_offset": 0, 00:18:01.007 "data_size": 0 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "name": null, 00:18:01.007 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:01.007 "is_configured": false, 00:18:01.007 "data_offset": 0, 00:18:01.007 "data_size": 65536 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "name": "BaseBdev3", 00:18:01.007 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:01.007 "is_configured": true, 00:18:01.007 "data_offset": 0, 00:18:01.007 "data_size": 65536 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "name": "BaseBdev4", 00:18:01.007 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:01.007 "is_configured": true, 00:18:01.007 "data_offset": 0, 00:18:01.007 "data_size": 65536 00:18:01.007 } 00:18:01.007 ] 00:18:01.007 }' 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.007 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:01.267 20:11:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.267 [2024-12-05 20:11:02.668776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.267 BaseBdev1 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.267 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.267 20:11:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.267 [ 00:18:01.267 { 00:18:01.267 "name": "BaseBdev1", 00:18:01.267 "aliases": [ 00:18:01.267 "7355e7a8-2930-4c02-a5b4-778fb60a5ef6" 00:18:01.267 ], 00:18:01.267 "product_name": "Malloc disk", 00:18:01.267 "block_size": 512, 00:18:01.267 "num_blocks": 65536, 00:18:01.267 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:01.267 "assigned_rate_limits": { 00:18:01.267 "rw_ios_per_sec": 0, 00:18:01.267 "rw_mbytes_per_sec": 0, 00:18:01.267 "r_mbytes_per_sec": 0, 00:18:01.267 "w_mbytes_per_sec": 0 00:18:01.267 }, 00:18:01.267 "claimed": true, 00:18:01.267 "claim_type": "exclusive_write", 00:18:01.267 "zoned": false, 00:18:01.267 "supported_io_types": { 00:18:01.267 "read": true, 00:18:01.267 "write": true, 00:18:01.267 "unmap": true, 00:18:01.267 "flush": true, 00:18:01.267 "reset": true, 00:18:01.267 "nvme_admin": false, 00:18:01.267 "nvme_io": false, 00:18:01.267 "nvme_io_md": false, 00:18:01.267 "write_zeroes": true, 00:18:01.267 "zcopy": true, 00:18:01.267 "get_zone_info": false, 00:18:01.267 "zone_management": false, 00:18:01.267 "zone_append": false, 00:18:01.267 "compare": false, 00:18:01.267 "compare_and_write": false, 00:18:01.527 "abort": true, 00:18:01.527 "seek_hole": false, 00:18:01.527 "seek_data": false, 00:18:01.527 "copy": true, 00:18:01.527 "nvme_iov_md": false 00:18:01.527 }, 00:18:01.527 "memory_domains": [ 00:18:01.527 { 00:18:01.527 "dma_device_id": "system", 00:18:01.527 "dma_device_type": 1 00:18:01.527 }, 00:18:01.527 { 00:18:01.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.527 "dma_device_type": 2 00:18:01.527 } 00:18:01.527 ], 00:18:01.527 "driver_specific": {} 00:18:01.527 } 00:18:01.527 ] 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:01.527 20:11:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.527 "name": "Existed_Raid", 00:18:01.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.527 "strip_size_kb": 64, 00:18:01.527 "state": 
"configuring", 00:18:01.527 "raid_level": "raid5f", 00:18:01.527 "superblock": false, 00:18:01.527 "num_base_bdevs": 4, 00:18:01.527 "num_base_bdevs_discovered": 3, 00:18:01.527 "num_base_bdevs_operational": 4, 00:18:01.527 "base_bdevs_list": [ 00:18:01.527 { 00:18:01.527 "name": "BaseBdev1", 00:18:01.527 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:01.527 "is_configured": true, 00:18:01.527 "data_offset": 0, 00:18:01.527 "data_size": 65536 00:18:01.527 }, 00:18:01.527 { 00:18:01.527 "name": null, 00:18:01.527 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:01.527 "is_configured": false, 00:18:01.527 "data_offset": 0, 00:18:01.527 "data_size": 65536 00:18:01.527 }, 00:18:01.527 { 00:18:01.527 "name": "BaseBdev3", 00:18:01.527 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:01.527 "is_configured": true, 00:18:01.527 "data_offset": 0, 00:18:01.527 "data_size": 65536 00:18:01.527 }, 00:18:01.527 { 00:18:01.527 "name": "BaseBdev4", 00:18:01.527 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:01.527 "is_configured": true, 00:18:01.527 "data_offset": 0, 00:18:01.527 "data_size": 65536 00:18:01.527 } 00:18:01.527 ] 00:18:01.527 }' 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.527 20:11:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.788 20:11:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.788 [2024-12-05 20:11:03.188010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.788 20:11:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.788 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.048 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.048 "name": "Existed_Raid", 00:18:02.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.048 "strip_size_kb": 64, 00:18:02.049 "state": "configuring", 00:18:02.049 "raid_level": "raid5f", 00:18:02.049 "superblock": false, 00:18:02.049 "num_base_bdevs": 4, 00:18:02.049 "num_base_bdevs_discovered": 2, 00:18:02.049 "num_base_bdevs_operational": 4, 00:18:02.049 "base_bdevs_list": [ 00:18:02.049 { 00:18:02.049 "name": "BaseBdev1", 00:18:02.049 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:02.049 "is_configured": true, 00:18:02.049 "data_offset": 0, 00:18:02.049 "data_size": 65536 00:18:02.049 }, 00:18:02.049 { 00:18:02.049 "name": null, 00:18:02.049 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:02.049 "is_configured": false, 00:18:02.049 "data_offset": 0, 00:18:02.049 "data_size": 65536 00:18:02.049 }, 00:18:02.049 { 00:18:02.049 "name": null, 00:18:02.049 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:02.049 "is_configured": false, 00:18:02.049 "data_offset": 0, 00:18:02.049 "data_size": 65536 00:18:02.049 }, 00:18:02.049 { 00:18:02.049 "name": "BaseBdev4", 00:18:02.049 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:02.049 "is_configured": true, 00:18:02.049 "data_offset": 0, 00:18:02.049 "data_size": 65536 00:18:02.049 } 00:18:02.049 ] 00:18:02.049 }' 00:18:02.049 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.049 20:11:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.309 [2024-12-05 20:11:03.627238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.309 
20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.309 "name": "Existed_Raid", 00:18:02.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.309 "strip_size_kb": 64, 00:18:02.309 "state": "configuring", 00:18:02.309 "raid_level": "raid5f", 00:18:02.309 "superblock": false, 00:18:02.309 "num_base_bdevs": 4, 00:18:02.309 "num_base_bdevs_discovered": 3, 00:18:02.309 "num_base_bdevs_operational": 4, 00:18:02.309 "base_bdevs_list": [ 00:18:02.309 { 00:18:02.309 "name": "BaseBdev1", 00:18:02.309 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:02.309 "is_configured": true, 00:18:02.309 "data_offset": 0, 00:18:02.309 "data_size": 65536 00:18:02.309 }, 00:18:02.309 { 00:18:02.309 "name": null, 00:18:02.309 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:02.309 "is_configured": 
false, 00:18:02.309 "data_offset": 0, 00:18:02.309 "data_size": 65536 00:18:02.309 }, 00:18:02.309 { 00:18:02.309 "name": "BaseBdev3", 00:18:02.309 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:02.309 "is_configured": true, 00:18:02.309 "data_offset": 0, 00:18:02.309 "data_size": 65536 00:18:02.309 }, 00:18:02.309 { 00:18:02.309 "name": "BaseBdev4", 00:18:02.309 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:02.309 "is_configured": true, 00:18:02.309 "data_offset": 0, 00:18:02.309 "data_size": 65536 00:18:02.309 } 00:18:02.309 ] 00:18:02.309 }' 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.309 20:11:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.880 [2024-12-05 20:11:04.054525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.880 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.880 "name": "Existed_Raid", 00:18:02.880 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:02.880 "strip_size_kb": 64, 00:18:02.880 "state": "configuring", 00:18:02.880 "raid_level": "raid5f", 00:18:02.880 "superblock": false, 00:18:02.880 "num_base_bdevs": 4, 00:18:02.880 "num_base_bdevs_discovered": 2, 00:18:02.880 "num_base_bdevs_operational": 4, 00:18:02.880 "base_bdevs_list": [ 00:18:02.880 { 00:18:02.880 "name": null, 00:18:02.880 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:02.880 "is_configured": false, 00:18:02.880 "data_offset": 0, 00:18:02.880 "data_size": 65536 00:18:02.880 }, 00:18:02.881 { 00:18:02.881 "name": null, 00:18:02.881 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:02.881 "is_configured": false, 00:18:02.881 "data_offset": 0, 00:18:02.881 "data_size": 65536 00:18:02.881 }, 00:18:02.881 { 00:18:02.881 "name": "BaseBdev3", 00:18:02.881 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:02.881 "is_configured": true, 00:18:02.881 "data_offset": 0, 00:18:02.881 "data_size": 65536 00:18:02.881 }, 00:18:02.881 { 00:18:02.881 "name": "BaseBdev4", 00:18:02.881 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:02.881 "is_configured": true, 00:18:02.881 "data_offset": 0, 00:18:02.881 "data_size": 65536 00:18:02.881 } 00:18:02.881 ] 00:18:02.881 }' 00:18:02.881 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.881 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.140 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.140 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:03.140 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.140 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.140 20:11:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.400 [2024-12-05 20:11:04.599134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.400 "name": "Existed_Raid", 00:18:03.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.400 "strip_size_kb": 64, 00:18:03.400 "state": "configuring", 00:18:03.400 "raid_level": "raid5f", 00:18:03.400 "superblock": false, 00:18:03.400 "num_base_bdevs": 4, 00:18:03.400 "num_base_bdevs_discovered": 3, 00:18:03.400 "num_base_bdevs_operational": 4, 00:18:03.400 "base_bdevs_list": [ 00:18:03.400 { 00:18:03.400 "name": null, 00:18:03.400 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:03.400 "is_configured": false, 00:18:03.400 "data_offset": 0, 00:18:03.400 "data_size": 65536 00:18:03.400 }, 00:18:03.400 { 00:18:03.400 "name": "BaseBdev2", 00:18:03.400 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:03.400 "is_configured": true, 00:18:03.400 "data_offset": 0, 00:18:03.400 "data_size": 65536 00:18:03.400 }, 00:18:03.400 { 00:18:03.400 "name": "BaseBdev3", 00:18:03.400 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:03.400 "is_configured": true, 00:18:03.400 "data_offset": 0, 00:18:03.400 "data_size": 65536 00:18:03.400 }, 00:18:03.400 { 00:18:03.400 "name": "BaseBdev4", 00:18:03.400 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:03.400 "is_configured": true, 00:18:03.400 "data_offset": 0, 00:18:03.400 "data_size": 65536 00:18:03.400 } 00:18:03.400 ] 00:18:03.400 }' 00:18:03.400 20:11:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.400 20:11:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.660 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.919 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7355e7a8-2930-4c02-a5b4-778fb60a5ef6 00:18:03.919 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.919 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.919 [2024-12-05 20:11:05.162086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:03.919 [2024-12-05 
20:11:05.162207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.919 [2024-12-05 20:11:05.162232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:03.919 [2024-12-05 20:11:05.162514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:03.919 [2024-12-05 20:11:05.169113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.919 [2024-12-05 20:11:05.169172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:03.919 [2024-12-05 20:11:05.169468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.919 NewBaseBdev 00:18:03.919 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.920 [ 00:18:03.920 { 00:18:03.920 "name": "NewBaseBdev", 00:18:03.920 "aliases": [ 00:18:03.920 "7355e7a8-2930-4c02-a5b4-778fb60a5ef6" 00:18:03.920 ], 00:18:03.920 "product_name": "Malloc disk", 00:18:03.920 "block_size": 512, 00:18:03.920 "num_blocks": 65536, 00:18:03.920 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:03.920 "assigned_rate_limits": { 00:18:03.920 "rw_ios_per_sec": 0, 00:18:03.920 "rw_mbytes_per_sec": 0, 00:18:03.920 "r_mbytes_per_sec": 0, 00:18:03.920 "w_mbytes_per_sec": 0 00:18:03.920 }, 00:18:03.920 "claimed": true, 00:18:03.920 "claim_type": "exclusive_write", 00:18:03.920 "zoned": false, 00:18:03.920 "supported_io_types": { 00:18:03.920 "read": true, 00:18:03.920 "write": true, 00:18:03.920 "unmap": true, 00:18:03.920 "flush": true, 00:18:03.920 "reset": true, 00:18:03.920 "nvme_admin": false, 00:18:03.920 "nvme_io": false, 00:18:03.920 "nvme_io_md": false, 00:18:03.920 "write_zeroes": true, 00:18:03.920 "zcopy": true, 00:18:03.920 "get_zone_info": false, 00:18:03.920 "zone_management": false, 00:18:03.920 "zone_append": false, 00:18:03.920 "compare": false, 00:18:03.920 "compare_and_write": false, 00:18:03.920 "abort": true, 00:18:03.920 "seek_hole": false, 00:18:03.920 "seek_data": false, 00:18:03.920 "copy": true, 00:18:03.920 "nvme_iov_md": false 00:18:03.920 }, 00:18:03.920 "memory_domains": [ 00:18:03.920 { 00:18:03.920 "dma_device_id": "system", 00:18:03.920 "dma_device_type": 1 00:18:03.920 }, 00:18:03.920 { 00:18:03.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.920 "dma_device_type": 2 00:18:03.920 } 
00:18:03.920 ], 00:18:03.920 "driver_specific": {} 00:18:03.920 } 00:18:03.920 ] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.920 "name": "Existed_Raid", 00:18:03.920 "uuid": "917956d7-b291-4f60-af7b-00f7455a066b", 00:18:03.920 "strip_size_kb": 64, 00:18:03.920 "state": "online", 00:18:03.920 "raid_level": "raid5f", 00:18:03.920 "superblock": false, 00:18:03.920 "num_base_bdevs": 4, 00:18:03.920 "num_base_bdevs_discovered": 4, 00:18:03.920 "num_base_bdevs_operational": 4, 00:18:03.920 "base_bdevs_list": [ 00:18:03.920 { 00:18:03.920 "name": "NewBaseBdev", 00:18:03.920 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:03.920 "is_configured": true, 00:18:03.920 "data_offset": 0, 00:18:03.920 "data_size": 65536 00:18:03.920 }, 00:18:03.920 { 00:18:03.920 "name": "BaseBdev2", 00:18:03.920 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:03.920 "is_configured": true, 00:18:03.920 "data_offset": 0, 00:18:03.920 "data_size": 65536 00:18:03.920 }, 00:18:03.920 { 00:18:03.920 "name": "BaseBdev3", 00:18:03.920 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:03.920 "is_configured": true, 00:18:03.920 "data_offset": 0, 00:18:03.920 "data_size": 65536 00:18:03.920 }, 00:18:03.920 { 00:18:03.920 "name": "BaseBdev4", 00:18:03.920 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:03.920 "is_configured": true, 00:18:03.920 "data_offset": 0, 00:18:03.920 "data_size": 65536 00:18:03.920 } 00:18:03.920 ] 00:18:03.920 }' 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.920 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.179 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.179 [2024-12-05 20:11:05.605089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.438 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.438 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.438 "name": "Existed_Raid", 00:18:04.438 "aliases": [ 00:18:04.438 "917956d7-b291-4f60-af7b-00f7455a066b" 00:18:04.438 ], 00:18:04.438 "product_name": "Raid Volume", 00:18:04.438 "block_size": 512, 00:18:04.438 "num_blocks": 196608, 00:18:04.438 "uuid": "917956d7-b291-4f60-af7b-00f7455a066b", 00:18:04.438 "assigned_rate_limits": { 00:18:04.438 "rw_ios_per_sec": 0, 00:18:04.438 "rw_mbytes_per_sec": 0, 00:18:04.438 "r_mbytes_per_sec": 0, 00:18:04.438 "w_mbytes_per_sec": 0 00:18:04.438 }, 00:18:04.438 "claimed": false, 00:18:04.438 "zoned": false, 00:18:04.438 "supported_io_types": { 00:18:04.438 "read": true, 00:18:04.438 "write": true, 00:18:04.438 "unmap": false, 00:18:04.438 "flush": false, 00:18:04.438 "reset": true, 00:18:04.438 "nvme_admin": false, 00:18:04.438 "nvme_io": false, 00:18:04.438 "nvme_io_md": 
false, 00:18:04.438 "write_zeroes": true, 00:18:04.438 "zcopy": false, 00:18:04.438 "get_zone_info": false, 00:18:04.438 "zone_management": false, 00:18:04.438 "zone_append": false, 00:18:04.438 "compare": false, 00:18:04.438 "compare_and_write": false, 00:18:04.438 "abort": false, 00:18:04.438 "seek_hole": false, 00:18:04.438 "seek_data": false, 00:18:04.438 "copy": false, 00:18:04.438 "nvme_iov_md": false 00:18:04.438 }, 00:18:04.438 "driver_specific": { 00:18:04.438 "raid": { 00:18:04.438 "uuid": "917956d7-b291-4f60-af7b-00f7455a066b", 00:18:04.438 "strip_size_kb": 64, 00:18:04.438 "state": "online", 00:18:04.438 "raid_level": "raid5f", 00:18:04.438 "superblock": false, 00:18:04.438 "num_base_bdevs": 4, 00:18:04.438 "num_base_bdevs_discovered": 4, 00:18:04.438 "num_base_bdevs_operational": 4, 00:18:04.438 "base_bdevs_list": [ 00:18:04.438 { 00:18:04.438 "name": "NewBaseBdev", 00:18:04.438 "uuid": "7355e7a8-2930-4c02-a5b4-778fb60a5ef6", 00:18:04.438 "is_configured": true, 00:18:04.438 "data_offset": 0, 00:18:04.438 "data_size": 65536 00:18:04.438 }, 00:18:04.438 { 00:18:04.438 "name": "BaseBdev2", 00:18:04.438 "uuid": "df010d65-781f-4600-8661-4f78f0beab6c", 00:18:04.438 "is_configured": true, 00:18:04.438 "data_offset": 0, 00:18:04.438 "data_size": 65536 00:18:04.438 }, 00:18:04.438 { 00:18:04.438 "name": "BaseBdev3", 00:18:04.438 "uuid": "c1e594fd-2544-42d8-94bb-952437210082", 00:18:04.438 "is_configured": true, 00:18:04.438 "data_offset": 0, 00:18:04.438 "data_size": 65536 00:18:04.438 }, 00:18:04.438 { 00:18:04.438 "name": "BaseBdev4", 00:18:04.438 "uuid": "39fdc390-3f30-41e0-9f34-5d03b7156b9b", 00:18:04.438 "is_configured": true, 00:18:04.439 "data_offset": 0, 00:18:04.439 "data_size": 65536 00:18:04.439 } 00:18:04.439 ] 00:18:04.439 } 00:18:04.439 } 00:18:04.439 }' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.439 20:11:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:04.439 BaseBdev2 00:18:04.439 BaseBdev3 00:18:04.439 BaseBdev4' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.439 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.699 20:11:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.699 [2024-12-05 20:11:05.932351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.699 [2024-12-05 20:11:05.932418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.699 [2024-12-05 20:11:05.932493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.699 [2024-12-05 20:11:05.932807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.699 [2024-12-05 20:11:05.932819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82848 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82848 ']' 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82848 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82848 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82848' 00:18:04.699 killing process with pid 82848 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82848 00:18:04.699 [2024-12-05 20:11:05.982320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.699 20:11:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82848 00:18:04.959 [2024-12-05 20:11:06.351070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:06.340 00:18:06.340 real 0m11.013s 00:18:06.340 user 0m17.502s 00:18:06.340 sys 0m1.948s 00:18:06.340 ************************************ 00:18:06.340 END TEST raid5f_state_function_test 00:18:06.340 ************************************ 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.340 20:11:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:06.340 20:11:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:06.340 20:11:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.340 20:11:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.340 ************************************ 00:18:06.340 START TEST 
raid5f_state_function_test_sb 00:18:06.340 ************************************ 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:06.340 
20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83518 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83518' 00:18:06.340 Process raid pid: 83518 00:18:06.340 20:11:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83518 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83518 ']' 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.340 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.340 [2024-12-05 20:11:07.584009] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:18:06.340 [2024-12-05 20:11:07.584195] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.340 [2024-12-05 20:11:07.755576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.600 [2024-12-05 20:11:07.861400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.860 [2024-12-05 20:11:08.060114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.860 [2024-12-05 20:11:08.060216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.120 [2024-12-05 20:11:08.398061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.120 [2024-12-05 20:11:08.398114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.120 [2024-12-05 20:11:08.398124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.120 [2024-12-05 20:11:08.398134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.120 [2024-12-05 20:11:08.398140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:07.120 [2024-12-05 20:11:08.398149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.120 [2024-12-05 20:11:08.398155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.120 [2024-12-05 20:11:08.398163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.120 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.120 "name": "Existed_Raid", 00:18:07.120 "uuid": "f5a18644-e057-4edb-8e3f-93721c2e416a", 00:18:07.120 "strip_size_kb": 64, 00:18:07.120 "state": "configuring", 00:18:07.120 "raid_level": "raid5f", 00:18:07.120 "superblock": true, 00:18:07.120 "num_base_bdevs": 4, 00:18:07.120 "num_base_bdevs_discovered": 0, 00:18:07.120 "num_base_bdevs_operational": 4, 00:18:07.120 "base_bdevs_list": [ 00:18:07.120 { 00:18:07.120 "name": "BaseBdev1", 00:18:07.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.120 "is_configured": false, 00:18:07.120 "data_offset": 0, 00:18:07.121 "data_size": 0 00:18:07.121 }, 00:18:07.121 { 00:18:07.121 "name": "BaseBdev2", 00:18:07.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.121 "is_configured": false, 00:18:07.121 "data_offset": 0, 00:18:07.121 "data_size": 0 00:18:07.121 }, 00:18:07.121 { 00:18:07.121 "name": "BaseBdev3", 00:18:07.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.121 "is_configured": false, 00:18:07.121 "data_offset": 0, 00:18:07.121 "data_size": 0 00:18:07.121 }, 00:18:07.121 { 00:18:07.121 "name": "BaseBdev4", 00:18:07.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.121 "is_configured": false, 00:18:07.121 "data_offset": 0, 00:18:07.121 "data_size": 0 00:18:07.121 } 00:18:07.121 ] 00:18:07.121 }' 00:18:07.121 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.121 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.691 [2024-12-05 20:11:08.857217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.691 [2024-12-05 20:11:08.857298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.691 [2024-12-05 20:11:08.869203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.691 [2024-12-05 20:11:08.869279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.691 [2024-12-05 20:11:08.869307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.691 [2024-12-05 20:11:08.869330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.691 [2024-12-05 20:11:08.869347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.691 [2024-12-05 20:11:08.869368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.691 [2024-12-05 20:11:08.869385] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.691 [2024-12-05 20:11:08.869405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.691 [2024-12-05 20:11:08.914441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.691 BaseBdev1 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.691 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.691 [ 00:18:07.691 { 00:18:07.691 "name": "BaseBdev1", 00:18:07.691 "aliases": [ 00:18:07.691 "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914" 00:18:07.691 ], 00:18:07.691 "product_name": "Malloc disk", 00:18:07.691 "block_size": 512, 00:18:07.691 "num_blocks": 65536, 00:18:07.691 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:07.691 "assigned_rate_limits": { 00:18:07.691 "rw_ios_per_sec": 0, 00:18:07.691 "rw_mbytes_per_sec": 0, 00:18:07.691 "r_mbytes_per_sec": 0, 00:18:07.691 "w_mbytes_per_sec": 0 00:18:07.691 }, 00:18:07.691 "claimed": true, 00:18:07.691 "claim_type": "exclusive_write", 00:18:07.691 "zoned": false, 00:18:07.691 "supported_io_types": { 00:18:07.691 "read": true, 00:18:07.691 "write": true, 00:18:07.691 "unmap": true, 00:18:07.691 "flush": true, 00:18:07.691 "reset": true, 00:18:07.691 "nvme_admin": false, 00:18:07.691 "nvme_io": false, 00:18:07.691 "nvme_io_md": false, 00:18:07.691 "write_zeroes": true, 00:18:07.691 "zcopy": true, 00:18:07.691 "get_zone_info": false, 00:18:07.691 "zone_management": false, 00:18:07.691 "zone_append": false, 00:18:07.691 "compare": false, 00:18:07.691 "compare_and_write": false, 00:18:07.691 "abort": true, 00:18:07.691 "seek_hole": false, 00:18:07.692 "seek_data": false, 00:18:07.692 "copy": true, 00:18:07.692 "nvme_iov_md": false 00:18:07.692 }, 00:18:07.692 "memory_domains": [ 00:18:07.692 { 00:18:07.692 "dma_device_id": "system", 00:18:07.692 "dma_device_type": 1 00:18:07.692 }, 00:18:07.692 { 00:18:07.692 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:07.692 "dma_device_type": 2 00:18:07.692 } 00:18:07.692 ], 00:18:07.692 "driver_specific": {} 00:18:07.692 } 00:18:07.692 ] 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.692 20:11:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.692 20:11:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.692 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.692 "name": "Existed_Raid", 00:18:07.692 "uuid": "91f8cdd3-3119-4fe9-8a9f-8a9450fdf540", 00:18:07.692 "strip_size_kb": 64, 00:18:07.692 "state": "configuring", 00:18:07.692 "raid_level": "raid5f", 00:18:07.692 "superblock": true, 00:18:07.692 "num_base_bdevs": 4, 00:18:07.692 "num_base_bdevs_discovered": 1, 00:18:07.692 "num_base_bdevs_operational": 4, 00:18:07.692 "base_bdevs_list": [ 00:18:07.692 { 00:18:07.692 "name": "BaseBdev1", 00:18:07.692 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:07.692 "is_configured": true, 00:18:07.692 "data_offset": 2048, 00:18:07.692 "data_size": 63488 00:18:07.692 }, 00:18:07.692 { 00:18:07.692 "name": "BaseBdev2", 00:18:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.692 "is_configured": false, 00:18:07.692 "data_offset": 0, 00:18:07.692 "data_size": 0 00:18:07.692 }, 00:18:07.692 { 00:18:07.692 "name": "BaseBdev3", 00:18:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.692 "is_configured": false, 00:18:07.692 "data_offset": 0, 00:18:07.692 "data_size": 0 00:18:07.692 }, 00:18:07.692 { 00:18:07.692 "name": "BaseBdev4", 00:18:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.692 "is_configured": false, 00:18:07.692 "data_offset": 0, 00:18:07.692 "data_size": 0 00:18:07.692 } 00:18:07.692 ] 00:18:07.692 }' 00:18:07.692 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.692 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.952 20:11:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.952 [2024-12-05 20:11:09.365702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.952 [2024-12-05 20:11:09.365745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.952 [2024-12-05 20:11:09.377744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.952 [2024-12-05 20:11:09.379492] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.952 [2024-12-05 20:11:09.379563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.952 [2024-12-05 20:11:09.379607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.952 [2024-12-05 20:11:09.379631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.952 [2024-12-05 20:11:09.379649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.952 [2024-12-05 20:11:09.379669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.952 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.212 20:11:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.212 "name": "Existed_Raid", 00:18:08.212 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:08.212 "strip_size_kb": 64, 00:18:08.212 "state": "configuring", 00:18:08.212 "raid_level": "raid5f", 00:18:08.212 "superblock": true, 00:18:08.212 "num_base_bdevs": 4, 00:18:08.212 "num_base_bdevs_discovered": 1, 00:18:08.212 "num_base_bdevs_operational": 4, 00:18:08.212 "base_bdevs_list": [ 00:18:08.212 { 00:18:08.212 "name": "BaseBdev1", 00:18:08.212 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:08.212 "is_configured": true, 00:18:08.212 "data_offset": 2048, 00:18:08.212 "data_size": 63488 00:18:08.212 }, 00:18:08.212 { 00:18:08.212 "name": "BaseBdev2", 00:18:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.212 "is_configured": false, 00:18:08.212 "data_offset": 0, 00:18:08.212 "data_size": 0 00:18:08.212 }, 00:18:08.212 { 00:18:08.212 "name": "BaseBdev3", 00:18:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.212 "is_configured": false, 00:18:08.212 "data_offset": 0, 00:18:08.212 "data_size": 0 00:18:08.212 }, 00:18:08.212 { 00:18:08.212 "name": "BaseBdev4", 00:18:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.212 "is_configured": false, 00:18:08.212 "data_offset": 0, 00:18:08.212 "data_size": 0 00:18:08.212 } 00:18:08.212 ] 00:18:08.212 }' 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.212 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.472 [2024-12-05 20:11:09.880935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.472 BaseBdev2 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.472 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.472 [ 00:18:08.472 { 00:18:08.472 "name": "BaseBdev2", 00:18:08.472 "aliases": [ 00:18:08.472 
"0d15908a-53ef-4c92-a0b3-cbd0f0846d62" 00:18:08.472 ], 00:18:08.472 "product_name": "Malloc disk", 00:18:08.472 "block_size": 512, 00:18:08.472 "num_blocks": 65536, 00:18:08.472 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:08.732 "assigned_rate_limits": { 00:18:08.732 "rw_ios_per_sec": 0, 00:18:08.732 "rw_mbytes_per_sec": 0, 00:18:08.732 "r_mbytes_per_sec": 0, 00:18:08.732 "w_mbytes_per_sec": 0 00:18:08.732 }, 00:18:08.732 "claimed": true, 00:18:08.732 "claim_type": "exclusive_write", 00:18:08.732 "zoned": false, 00:18:08.732 "supported_io_types": { 00:18:08.732 "read": true, 00:18:08.732 "write": true, 00:18:08.732 "unmap": true, 00:18:08.732 "flush": true, 00:18:08.732 "reset": true, 00:18:08.732 "nvme_admin": false, 00:18:08.732 "nvme_io": false, 00:18:08.732 "nvme_io_md": false, 00:18:08.732 "write_zeroes": true, 00:18:08.732 "zcopy": true, 00:18:08.732 "get_zone_info": false, 00:18:08.732 "zone_management": false, 00:18:08.732 "zone_append": false, 00:18:08.732 "compare": false, 00:18:08.732 "compare_and_write": false, 00:18:08.732 "abort": true, 00:18:08.732 "seek_hole": false, 00:18:08.732 "seek_data": false, 00:18:08.732 "copy": true, 00:18:08.732 "nvme_iov_md": false 00:18:08.732 }, 00:18:08.732 "memory_domains": [ 00:18:08.732 { 00:18:08.732 "dma_device_id": "system", 00:18:08.732 "dma_device_type": 1 00:18:08.732 }, 00:18:08.732 { 00:18:08.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.732 "dma_device_type": 2 00:18:08.732 } 00:18:08.732 ], 00:18:08.732 "driver_specific": {} 00:18:08.732 } 00:18:08.732 ] 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.732 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.733 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.733 "name": "Existed_Raid", 00:18:08.733 "uuid": 
"6d159542-12fa-4791-92ce-72a3407eca10", 00:18:08.733 "strip_size_kb": 64, 00:18:08.733 "state": "configuring", 00:18:08.733 "raid_level": "raid5f", 00:18:08.733 "superblock": true, 00:18:08.733 "num_base_bdevs": 4, 00:18:08.733 "num_base_bdevs_discovered": 2, 00:18:08.733 "num_base_bdevs_operational": 4, 00:18:08.733 "base_bdevs_list": [ 00:18:08.733 { 00:18:08.733 "name": "BaseBdev1", 00:18:08.733 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:08.733 "is_configured": true, 00:18:08.733 "data_offset": 2048, 00:18:08.733 "data_size": 63488 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "BaseBdev2", 00:18:08.733 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:08.733 "is_configured": true, 00:18:08.733 "data_offset": 2048, 00:18:08.733 "data_size": 63488 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "BaseBdev3", 00:18:08.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.733 "is_configured": false, 00:18:08.733 "data_offset": 0, 00:18:08.733 "data_size": 0 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "BaseBdev4", 00:18:08.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.733 "is_configured": false, 00:18:08.733 "data_offset": 0, 00:18:08.733 "data_size": 0 00:18:08.733 } 00:18:08.733 ] 00:18:08.733 }' 00:18:08.733 20:11:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.733 20:11:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.992 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.993 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.993 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.252 [2024-12-05 20:11:10.445353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.252 BaseBdev3 
00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.252 [ 00:18:09.252 { 00:18:09.252 "name": "BaseBdev3", 00:18:09.252 "aliases": [ 00:18:09.252 "3ed74450-c38a-4e85-8665-9d5f60e801a5" 00:18:09.252 ], 00:18:09.252 "product_name": "Malloc disk", 00:18:09.252 "block_size": 512, 00:18:09.252 "num_blocks": 65536, 00:18:09.252 "uuid": "3ed74450-c38a-4e85-8665-9d5f60e801a5", 00:18:09.252 
"assigned_rate_limits": { 00:18:09.252 "rw_ios_per_sec": 0, 00:18:09.252 "rw_mbytes_per_sec": 0, 00:18:09.252 "r_mbytes_per_sec": 0, 00:18:09.252 "w_mbytes_per_sec": 0 00:18:09.252 }, 00:18:09.252 "claimed": true, 00:18:09.252 "claim_type": "exclusive_write", 00:18:09.252 "zoned": false, 00:18:09.252 "supported_io_types": { 00:18:09.252 "read": true, 00:18:09.252 "write": true, 00:18:09.252 "unmap": true, 00:18:09.252 "flush": true, 00:18:09.252 "reset": true, 00:18:09.252 "nvme_admin": false, 00:18:09.252 "nvme_io": false, 00:18:09.252 "nvme_io_md": false, 00:18:09.252 "write_zeroes": true, 00:18:09.252 "zcopy": true, 00:18:09.252 "get_zone_info": false, 00:18:09.252 "zone_management": false, 00:18:09.252 "zone_append": false, 00:18:09.252 "compare": false, 00:18:09.252 "compare_and_write": false, 00:18:09.252 "abort": true, 00:18:09.252 "seek_hole": false, 00:18:09.252 "seek_data": false, 00:18:09.252 "copy": true, 00:18:09.252 "nvme_iov_md": false 00:18:09.252 }, 00:18:09.252 "memory_domains": [ 00:18:09.252 { 00:18:09.252 "dma_device_id": "system", 00:18:09.252 "dma_device_type": 1 00:18:09.252 }, 00:18:09.252 { 00:18:09.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.252 "dma_device_type": 2 00:18:09.252 } 00:18:09.252 ], 00:18:09.252 "driver_specific": {} 00:18:09.252 } 00:18:09.252 ] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.252 "name": "Existed_Raid", 00:18:09.252 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:09.252 "strip_size_kb": 64, 00:18:09.252 "state": "configuring", 00:18:09.252 "raid_level": "raid5f", 00:18:09.252 "superblock": true, 00:18:09.252 "num_base_bdevs": 4, 00:18:09.252 "num_base_bdevs_discovered": 3, 
00:18:09.252 "num_base_bdevs_operational": 4, 00:18:09.252 "base_bdevs_list": [ 00:18:09.252 { 00:18:09.252 "name": "BaseBdev1", 00:18:09.252 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:09.252 "is_configured": true, 00:18:09.252 "data_offset": 2048, 00:18:09.252 "data_size": 63488 00:18:09.252 }, 00:18:09.252 { 00:18:09.252 "name": "BaseBdev2", 00:18:09.252 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:09.252 "is_configured": true, 00:18:09.252 "data_offset": 2048, 00:18:09.252 "data_size": 63488 00:18:09.252 }, 00:18:09.252 { 00:18:09.252 "name": "BaseBdev3", 00:18:09.252 "uuid": "3ed74450-c38a-4e85-8665-9d5f60e801a5", 00:18:09.252 "is_configured": true, 00:18:09.252 "data_offset": 2048, 00:18:09.252 "data_size": 63488 00:18:09.252 }, 00:18:09.252 { 00:18:09.252 "name": "BaseBdev4", 00:18:09.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.252 "is_configured": false, 00:18:09.252 "data_offset": 0, 00:18:09.252 "data_size": 0 00:18:09.252 } 00:18:09.252 ] 00:18:09.252 }' 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.252 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.512 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:09.512 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.512 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.771 [2024-12-05 20:11:10.970315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.771 [2024-12-05 20:11:10.970677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:09.771 [2024-12-05 20:11:10.970729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:09.771 [2024-12-05 
20:11:10.971026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:09.771 BaseBdev4 00:18:09.771 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.771 20:11:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:09.771 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:09.771 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.771 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.772 [2024-12-05 20:11:10.978457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:09.772 [2024-12-05 20:11:10.978515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:09.772 [2024-12-05 20:11:10.978815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:09.772 20:11:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.772 20:11:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.772 [ 00:18:09.772 { 00:18:09.772 "name": "BaseBdev4", 00:18:09.772 "aliases": [ 00:18:09.772 "4ad02028-efe8-42ee-8551-89cadd0adb4e" 00:18:09.772 ], 00:18:09.772 "product_name": "Malloc disk", 00:18:09.772 "block_size": 512, 00:18:09.772 "num_blocks": 65536, 00:18:09.772 "uuid": "4ad02028-efe8-42ee-8551-89cadd0adb4e", 00:18:09.772 "assigned_rate_limits": { 00:18:09.772 "rw_ios_per_sec": 0, 00:18:09.772 "rw_mbytes_per_sec": 0, 00:18:09.772 "r_mbytes_per_sec": 0, 00:18:09.772 "w_mbytes_per_sec": 0 00:18:09.772 }, 00:18:09.772 "claimed": true, 00:18:09.772 "claim_type": "exclusive_write", 00:18:09.772 "zoned": false, 00:18:09.772 "supported_io_types": { 00:18:09.772 "read": true, 00:18:09.772 "write": true, 00:18:09.772 "unmap": true, 00:18:09.772 "flush": true, 00:18:09.772 "reset": true, 00:18:09.772 "nvme_admin": false, 00:18:09.772 "nvme_io": false, 00:18:09.772 "nvme_io_md": false, 00:18:09.772 "write_zeroes": true, 00:18:09.772 "zcopy": true, 00:18:09.772 "get_zone_info": false, 00:18:09.772 "zone_management": false, 00:18:09.772 "zone_append": false, 00:18:09.772 "compare": false, 00:18:09.772 "compare_and_write": false, 00:18:09.772 "abort": true, 00:18:09.772 "seek_hole": false, 00:18:09.772 "seek_data": false, 00:18:09.772 "copy": true, 00:18:09.772 "nvme_iov_md": false 00:18:09.772 }, 00:18:09.772 "memory_domains": [ 00:18:09.772 { 00:18:09.772 "dma_device_id": "system", 00:18:09.772 "dma_device_type": 1 00:18:09.772 }, 00:18:09.772 { 00:18:09.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.772 "dma_device_type": 2 00:18:09.772 } 00:18:09.772 ], 00:18:09.772 "driver_specific": {} 00:18:09.772 } 00:18:09.772 ] 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.772 20:11:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.772 "name": "Existed_Raid", 00:18:09.772 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:09.772 "strip_size_kb": 64, 00:18:09.772 "state": "online", 00:18:09.772 "raid_level": "raid5f", 00:18:09.772 "superblock": true, 00:18:09.772 "num_base_bdevs": 4, 00:18:09.772 "num_base_bdevs_discovered": 4, 00:18:09.772 "num_base_bdevs_operational": 4, 00:18:09.772 "base_bdevs_list": [ 00:18:09.772 { 00:18:09.772 "name": "BaseBdev1", 00:18:09.772 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:09.772 "is_configured": true, 00:18:09.772 "data_offset": 2048, 00:18:09.772 "data_size": 63488 00:18:09.772 }, 00:18:09.772 { 00:18:09.772 "name": "BaseBdev2", 00:18:09.772 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:09.772 "is_configured": true, 00:18:09.772 "data_offset": 2048, 00:18:09.772 "data_size": 63488 00:18:09.772 }, 00:18:09.772 { 00:18:09.772 "name": "BaseBdev3", 00:18:09.772 "uuid": "3ed74450-c38a-4e85-8665-9d5f60e801a5", 00:18:09.772 "is_configured": true, 00:18:09.772 "data_offset": 2048, 00:18:09.772 "data_size": 63488 00:18:09.772 }, 00:18:09.772 { 00:18:09.772 "name": "BaseBdev4", 00:18:09.772 "uuid": "4ad02028-efe8-42ee-8551-89cadd0adb4e", 00:18:09.772 "is_configured": true, 00:18:09.772 "data_offset": 2048, 00:18:09.772 "data_size": 63488 00:18:09.772 } 00:18:09.772 ] 00:18:09.772 }' 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.772 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.031 [2024-12-05 20:11:11.422322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.031 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.031 "name": "Existed_Raid", 00:18:10.031 "aliases": [ 00:18:10.031 "6d159542-12fa-4791-92ce-72a3407eca10" 00:18:10.031 ], 00:18:10.031 "product_name": "Raid Volume", 00:18:10.031 "block_size": 512, 00:18:10.031 "num_blocks": 190464, 00:18:10.031 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:10.031 "assigned_rate_limits": { 00:18:10.031 "rw_ios_per_sec": 0, 00:18:10.031 "rw_mbytes_per_sec": 0, 00:18:10.031 "r_mbytes_per_sec": 0, 00:18:10.031 "w_mbytes_per_sec": 0 00:18:10.031 }, 00:18:10.031 "claimed": false, 00:18:10.031 "zoned": false, 00:18:10.031 "supported_io_types": { 00:18:10.031 "read": true, 00:18:10.031 "write": true, 00:18:10.031 "unmap": false, 00:18:10.031 "flush": false, 
00:18:10.031 "reset": true, 00:18:10.031 "nvme_admin": false, 00:18:10.031 "nvme_io": false, 00:18:10.031 "nvme_io_md": false, 00:18:10.031 "write_zeroes": true, 00:18:10.031 "zcopy": false, 00:18:10.031 "get_zone_info": false, 00:18:10.031 "zone_management": false, 00:18:10.031 "zone_append": false, 00:18:10.031 "compare": false, 00:18:10.031 "compare_and_write": false, 00:18:10.031 "abort": false, 00:18:10.031 "seek_hole": false, 00:18:10.031 "seek_data": false, 00:18:10.031 "copy": false, 00:18:10.031 "nvme_iov_md": false 00:18:10.031 }, 00:18:10.031 "driver_specific": { 00:18:10.031 "raid": { 00:18:10.031 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:10.031 "strip_size_kb": 64, 00:18:10.031 "state": "online", 00:18:10.031 "raid_level": "raid5f", 00:18:10.031 "superblock": true, 00:18:10.031 "num_base_bdevs": 4, 00:18:10.031 "num_base_bdevs_discovered": 4, 00:18:10.031 "num_base_bdevs_operational": 4, 00:18:10.031 "base_bdevs_list": [ 00:18:10.031 { 00:18:10.031 "name": "BaseBdev1", 00:18:10.032 "uuid": "e1b20fe8-4f27-4fb1-ab2f-eed3dde55914", 00:18:10.032 "is_configured": true, 00:18:10.032 "data_offset": 2048, 00:18:10.032 "data_size": 63488 00:18:10.032 }, 00:18:10.032 { 00:18:10.032 "name": "BaseBdev2", 00:18:10.032 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:10.032 "is_configured": true, 00:18:10.032 "data_offset": 2048, 00:18:10.032 "data_size": 63488 00:18:10.032 }, 00:18:10.032 { 00:18:10.032 "name": "BaseBdev3", 00:18:10.032 "uuid": "3ed74450-c38a-4e85-8665-9d5f60e801a5", 00:18:10.032 "is_configured": true, 00:18:10.032 "data_offset": 2048, 00:18:10.032 "data_size": 63488 00:18:10.032 }, 00:18:10.032 { 00:18:10.032 "name": "BaseBdev4", 00:18:10.032 "uuid": "4ad02028-efe8-42ee-8551-89cadd0adb4e", 00:18:10.032 "is_configured": true, 00:18:10.032 "data_offset": 2048, 00:18:10.032 "data_size": 63488 00:18:10.032 } 00:18:10.032 ] 00:18:10.032 } 00:18:10.032 } 00:18:10.032 }' 00:18:10.032 20:11:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:10.291 BaseBdev2 00:18:10.291 BaseBdev3 00:18:10.291 BaseBdev4' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.291 20:11:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:10.291 20:11:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.291 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 [2024-12-05 20:11:11.689689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.550 "name": "Existed_Raid", 00:18:10.550 "uuid": "6d159542-12fa-4791-92ce-72a3407eca10", 00:18:10.550 "strip_size_kb": 64, 00:18:10.550 "state": "online", 00:18:10.550 "raid_level": "raid5f", 00:18:10.550 "superblock": true, 00:18:10.550 "num_base_bdevs": 4, 00:18:10.550 "num_base_bdevs_discovered": 3, 00:18:10.550 "num_base_bdevs_operational": 3, 00:18:10.550 "base_bdevs_list": [ 00:18:10.550 { 00:18:10.550 "name": 
null, 00:18:10.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.550 "is_configured": false, 00:18:10.550 "data_offset": 0, 00:18:10.550 "data_size": 63488 00:18:10.550 }, 00:18:10.550 { 00:18:10.550 "name": "BaseBdev2", 00:18:10.550 "uuid": "0d15908a-53ef-4c92-a0b3-cbd0f0846d62", 00:18:10.550 "is_configured": true, 00:18:10.550 "data_offset": 2048, 00:18:10.550 "data_size": 63488 00:18:10.550 }, 00:18:10.550 { 00:18:10.550 "name": "BaseBdev3", 00:18:10.550 "uuid": "3ed74450-c38a-4e85-8665-9d5f60e801a5", 00:18:10.550 "is_configured": true, 00:18:10.550 "data_offset": 2048, 00:18:10.550 "data_size": 63488 00:18:10.550 }, 00:18:10.550 { 00:18:10.550 "name": "BaseBdev4", 00:18:10.550 "uuid": "4ad02028-efe8-42ee-8551-89cadd0adb4e", 00:18:10.550 "is_configured": true, 00:18:10.550 "data_offset": 2048, 00:18:10.550 "data_size": 63488 00:18:10.550 } 00:18:10.550 ] 00:18:10.550 }' 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.550 20:11:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.809 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.068 [2024-12-05 20:11:12.245462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.068 [2024-12-05 20:11:12.245625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.068 [2024-12-05 20:11:12.335150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.068 [2024-12-05 20:11:12.391067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.068 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 [2024-12-05 
20:11:12.540185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:11.326 [2024-12-05 20:11:12.540234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 BaseBdev2 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.326 [ 00:18:11.326 { 00:18:11.326 "name": "BaseBdev2", 00:18:11.326 "aliases": [ 00:18:11.326 "2e7a2429-e2f6-4a58-815e-6599a6957a95" 00:18:11.326 ], 00:18:11.326 "product_name": "Malloc disk", 00:18:11.326 "block_size": 512, 00:18:11.326 
"num_blocks": 65536, 00:18:11.326 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95", 00:18:11.326 "assigned_rate_limits": { 00:18:11.326 "rw_ios_per_sec": 0, 00:18:11.326 "rw_mbytes_per_sec": 0, 00:18:11.326 "r_mbytes_per_sec": 0, 00:18:11.326 "w_mbytes_per_sec": 0 00:18:11.326 }, 00:18:11.326 "claimed": false, 00:18:11.326 "zoned": false, 00:18:11.326 "supported_io_types": { 00:18:11.326 "read": true, 00:18:11.326 "write": true, 00:18:11.326 "unmap": true, 00:18:11.326 "flush": true, 00:18:11.326 "reset": true, 00:18:11.326 "nvme_admin": false, 00:18:11.326 "nvme_io": false, 00:18:11.326 "nvme_io_md": false, 00:18:11.326 "write_zeroes": true, 00:18:11.326 "zcopy": true, 00:18:11.326 "get_zone_info": false, 00:18:11.326 "zone_management": false, 00:18:11.326 "zone_append": false, 00:18:11.326 "compare": false, 00:18:11.326 "compare_and_write": false, 00:18:11.326 "abort": true, 00:18:11.326 "seek_hole": false, 00:18:11.326 "seek_data": false, 00:18:11.326 "copy": true, 00:18:11.326 "nvme_iov_md": false 00:18:11.326 }, 00:18:11.326 "memory_domains": [ 00:18:11.326 { 00:18:11.326 "dma_device_id": "system", 00:18:11.326 "dma_device_type": 1 00:18:11.326 }, 00:18:11.326 { 00:18:11.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.326 "dma_device_type": 2 00:18:11.326 } 00:18:11.326 ], 00:18:11.326 "driver_specific": {} 00:18:11.326 } 00:18:11.326 ] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.326 20:11:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.326 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 BaseBdev3 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 [ 00:18:11.585 { 00:18:11.585 "name": "BaseBdev3", 00:18:11.585 "aliases": [ 00:18:11.585 
"3563aed0-bf1f-46d5-91de-21f2d7c0a51a" 00:18:11.585 ], 00:18:11.585 "product_name": "Malloc disk", 00:18:11.585 "block_size": 512, 00:18:11.585 "num_blocks": 65536, 00:18:11.585 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a", 00:18:11.585 "assigned_rate_limits": { 00:18:11.585 "rw_ios_per_sec": 0, 00:18:11.585 "rw_mbytes_per_sec": 0, 00:18:11.585 "r_mbytes_per_sec": 0, 00:18:11.585 "w_mbytes_per_sec": 0 00:18:11.585 }, 00:18:11.585 "claimed": false, 00:18:11.585 "zoned": false, 00:18:11.585 "supported_io_types": { 00:18:11.585 "read": true, 00:18:11.585 "write": true, 00:18:11.585 "unmap": true, 00:18:11.585 "flush": true, 00:18:11.585 "reset": true, 00:18:11.585 "nvme_admin": false, 00:18:11.585 "nvme_io": false, 00:18:11.585 "nvme_io_md": false, 00:18:11.585 "write_zeroes": true, 00:18:11.585 "zcopy": true, 00:18:11.585 "get_zone_info": false, 00:18:11.585 "zone_management": false, 00:18:11.585 "zone_append": false, 00:18:11.585 "compare": false, 00:18:11.585 "compare_and_write": false, 00:18:11.585 "abort": true, 00:18:11.585 "seek_hole": false, 00:18:11.585 "seek_data": false, 00:18:11.585 "copy": true, 00:18:11.585 "nvme_iov_md": false 00:18:11.585 }, 00:18:11.585 "memory_domains": [ 00:18:11.585 { 00:18:11.585 "dma_device_id": "system", 00:18:11.585 "dma_device_type": 1 00:18:11.585 }, 00:18:11.585 { 00:18:11.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.585 "dma_device_type": 2 00:18:11.585 } 00:18:11.585 ], 00:18:11.585 "driver_specific": {} 00:18:11.585 } 00:18:11.585 ] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.585 20:11:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 BaseBdev4 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:11.585 [ 00:18:11.585 { 00:18:11.585 "name": "BaseBdev4", 00:18:11.585 "aliases": [ 00:18:11.585 "1664ddbf-2753-4a72-b1c6-46c97bb73c4b" 00:18:11.585 ], 00:18:11.585 "product_name": "Malloc disk", 00:18:11.585 "block_size": 512, 00:18:11.585 "num_blocks": 65536, 00:18:11.585 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b", 00:18:11.585 "assigned_rate_limits": { 00:18:11.585 "rw_ios_per_sec": 0, 00:18:11.585 "rw_mbytes_per_sec": 0, 00:18:11.585 "r_mbytes_per_sec": 0, 00:18:11.585 "w_mbytes_per_sec": 0 00:18:11.585 }, 00:18:11.585 "claimed": false, 00:18:11.585 "zoned": false, 00:18:11.585 "supported_io_types": { 00:18:11.585 "read": true, 00:18:11.585 "write": true, 00:18:11.585 "unmap": true, 00:18:11.585 "flush": true, 00:18:11.585 "reset": true, 00:18:11.585 "nvme_admin": false, 00:18:11.585 "nvme_io": false, 00:18:11.585 "nvme_io_md": false, 00:18:11.585 "write_zeroes": true, 00:18:11.585 "zcopy": true, 00:18:11.585 "get_zone_info": false, 00:18:11.585 "zone_management": false, 00:18:11.585 "zone_append": false, 00:18:11.585 "compare": false, 00:18:11.585 "compare_and_write": false, 00:18:11.585 "abort": true, 00:18:11.585 "seek_hole": false, 00:18:11.585 "seek_data": false, 00:18:11.585 "copy": true, 00:18:11.585 "nvme_iov_md": false 00:18:11.585 }, 00:18:11.585 "memory_domains": [ 00:18:11.585 { 00:18:11.585 "dma_device_id": "system", 00:18:11.585 "dma_device_type": 1 00:18:11.585 }, 00:18:11.585 { 00:18:11.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.585 "dma_device_type": 2 00:18:11.585 } 00:18:11.585 ], 00:18:11.585 "driver_specific": {} 00:18:11.585 } 00:18:11.585 ] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.585 20:11:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:11.585 [2024-12-05 20:11:12.905191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:11.585 [2024-12-05 20:11:12.905298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:11.585 [2024-12-05 20:11:12.905340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:11.585 [2024-12-05 20:11:12.907120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:11.585 [2024-12-05 20:11:12.907207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.585 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:11.585 "name": "Existed_Raid",
00:18:11.585 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:11.585 "strip_size_kb": 64,
00:18:11.585 "state": "configuring",
00:18:11.585 "raid_level": "raid5f",
00:18:11.585 "superblock": true,
00:18:11.585 "num_base_bdevs": 4,
00:18:11.585 "num_base_bdevs_discovered": 3,
00:18:11.585 "num_base_bdevs_operational": 4,
00:18:11.585 "base_bdevs_list": [
00:18:11.585 {
00:18:11.585 "name": "BaseBdev1",
00:18:11.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.585 "is_configured": false,
00:18:11.586 "data_offset": 0,
00:18:11.586 "data_size": 0
00:18:11.586 },
00:18:11.586 {
00:18:11.586 "name": "BaseBdev2",
00:18:11.586 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:11.586 "is_configured": true,
00:18:11.586 "data_offset": 2048,
00:18:11.586 "data_size": 63488
00:18:11.586 },
00:18:11.586 {
00:18:11.586 "name": "BaseBdev3",
00:18:11.586 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:11.586 "is_configured": true,
00:18:11.586 "data_offset": 2048,
00:18:11.586 "data_size": 63488
00:18:11.586 },
00:18:11.586 {
00:18:11.586 "name": "BaseBdev4",
00:18:11.586 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:11.586 "is_configured": true,
00:18:11.586 "data_offset": 2048,
00:18:11.586 "data_size": 63488
00:18:11.586 }
00:18:11.586 ]
00:18:11.586 }'
00:18:11.586 20:11:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:11.586 20:11:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.153 [2024-12-05 20:11:13.356443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:12.153 "name": "Existed_Raid",
00:18:12.153 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:12.153 "strip_size_kb": 64,
00:18:12.153 "state": "configuring",
00:18:12.153 "raid_level": "raid5f",
00:18:12.153 "superblock": true,
00:18:12.153 "num_base_bdevs": 4,
00:18:12.153 "num_base_bdevs_discovered": 2,
00:18:12.153 "num_base_bdevs_operational": 4,
00:18:12.153 "base_bdevs_list": [
00:18:12.153 {
00:18:12.153 "name": "BaseBdev1",
00:18:12.153 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:12.153 "is_configured": false,
00:18:12.153 "data_offset": 0,
00:18:12.153 "data_size": 0
00:18:12.153 },
00:18:12.153 {
00:18:12.153 "name": null,
00:18:12.153 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:12.153 "is_configured": false,
00:18:12.153 "data_offset": 0,
00:18:12.153 "data_size": 63488
00:18:12.153 },
00:18:12.153 {
00:18:12.153 "name": "BaseBdev3",
00:18:12.153 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:12.153 "is_configured": true,
00:18:12.153 "data_offset": 2048,
00:18:12.153 "data_size": 63488
00:18:12.153 },
00:18:12.153 {
00:18:12.153 "name": "BaseBdev4",
00:18:12.153 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:12.153 "is_configured": true,
00:18:12.153 "data_offset": 2048,
00:18:12.153 "data_size": 63488
00:18:12.153 }
00:18:12.153 ]
00:18:12.153 }'
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:12.153 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.411 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:12.411 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.411 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.411 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:18:12.411 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.669 [2024-12-05 20:11:13.890279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:12.669 BaseBdev1
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.669 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.669 [
00:18:12.669 {
00:18:12.669 "name": "BaseBdev1",
00:18:12.669 "aliases": [
00:18:12.669 "83e73f80-dd91-4287-9d5f-7188f0ab88b7"
00:18:12.669 ],
00:18:12.669 "product_name": "Malloc disk",
00:18:12.669 "block_size": 512,
00:18:12.669 "num_blocks": 65536,
00:18:12.669 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:12.669 "assigned_rate_limits": {
00:18:12.669 "rw_ios_per_sec": 0,
00:18:12.669 "rw_mbytes_per_sec": 0,
00:18:12.669 "r_mbytes_per_sec": 0,
00:18:12.669 "w_mbytes_per_sec": 0
00:18:12.669 },
00:18:12.669 "claimed": true,
00:18:12.669 "claim_type": "exclusive_write",
00:18:12.669 "zoned": false,
00:18:12.669 "supported_io_types": {
00:18:12.669 "read": true,
00:18:12.669 "write": true,
00:18:12.669 "unmap": true,
00:18:12.669 "flush": true,
00:18:12.669 "reset": true,
00:18:12.669 "nvme_admin": false,
00:18:12.669 "nvme_io": false,
00:18:12.669 "nvme_io_md": false,
00:18:12.669 "write_zeroes": true,
00:18:12.669 "zcopy": true,
00:18:12.669 "get_zone_info": false,
00:18:12.669 "zone_management": false,
00:18:12.669 "zone_append": false,
00:18:12.669 "compare": false,
00:18:12.669 "compare_and_write": false,
00:18:12.669 "abort": true,
00:18:12.669 "seek_hole": false,
00:18:12.669 "seek_data": false,
00:18:12.669 "copy": true,
00:18:12.669 "nvme_iov_md": false
00:18:12.669 },
00:18:12.669 "memory_domains": [
00:18:12.669 {
00:18:12.669 "dma_device_id": "system",
00:18:12.669 "dma_device_type": 1
00:18:12.669 },
00:18:12.669 {
00:18:12.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:12.669 "dma_device_type": 2
00:18:12.669 }
00:18:12.670 ],
00:18:12.670 "driver_specific": {}
00:18:12.670 }
00:18:12.670 ]
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:12.670 "name": "Existed_Raid",
00:18:12.670 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:12.670 "strip_size_kb": 64,
00:18:12.670 "state": "configuring",
00:18:12.670 "raid_level": "raid5f",
00:18:12.670 "superblock": true,
00:18:12.670 "num_base_bdevs": 4,
00:18:12.670 "num_base_bdevs_discovered": 3,
00:18:12.670 "num_base_bdevs_operational": 4,
00:18:12.670 "base_bdevs_list": [
00:18:12.670 {
00:18:12.670 "name": "BaseBdev1",
00:18:12.670 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:12.670 "is_configured": true,
00:18:12.670 "data_offset": 2048,
00:18:12.670 "data_size": 63488
00:18:12.670 },
00:18:12.670 {
00:18:12.670 "name": null,
00:18:12.670 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:12.670 "is_configured": false,
00:18:12.670 "data_offset": 0,
00:18:12.670 "data_size": 63488
00:18:12.670 },
00:18:12.670 {
00:18:12.670 "name": "BaseBdev3",
00:18:12.670 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:12.670 "is_configured": true,
00:18:12.670 "data_offset": 2048,
00:18:12.670 "data_size": 63488
00:18:12.670 },
00:18:12.670 {
00:18:12.670 "name": "BaseBdev4",
00:18:12.670 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:12.670 "is_configured": true,
00:18:12.670 "data_offset": 2048,
00:18:12.670 "data_size": 63488
00:18:12.670 }
00:18:12.670 ]
00:18:12.670 }'
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:12.670 20:11:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.236 [2024-12-05 20:11:14.417446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:13.236 "name": "Existed_Raid",
00:18:13.236 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:13.236 "strip_size_kb": 64,
00:18:13.236 "state": "configuring",
00:18:13.236 "raid_level": "raid5f",
00:18:13.236 "superblock": true,
00:18:13.236 "num_base_bdevs": 4,
00:18:13.236 "num_base_bdevs_discovered": 2,
00:18:13.236 "num_base_bdevs_operational": 4,
00:18:13.236 "base_bdevs_list": [
00:18:13.236 {
00:18:13.236 "name": "BaseBdev1",
00:18:13.236 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:13.236 "is_configured": true,
00:18:13.236 "data_offset": 2048,
00:18:13.236 "data_size": 63488
00:18:13.236 },
00:18:13.236 {
00:18:13.236 "name": null,
00:18:13.236 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:13.236 "is_configured": false,
00:18:13.236 "data_offset": 0,
00:18:13.236 "data_size": 63488
00:18:13.236 },
00:18:13.236 {
00:18:13.236 "name": null,
00:18:13.236 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:13.236 "is_configured": false,
00:18:13.236 "data_offset": 0,
00:18:13.236 "data_size": 63488
00:18:13.236 },
00:18:13.236 {
00:18:13.236 "name": "BaseBdev4",
00:18:13.236 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:13.236 "is_configured": true,
00:18:13.236 "data_offset": 2048,
00:18:13.236 "data_size": 63488
00:18:13.236 }
00:18:13.236 ]
00:18:13.236 }'
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:13.236 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.495 [2024-12-05 20:11:14.904574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:13.495 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.757 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:13.757 "name": "Existed_Raid",
00:18:13.758 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:13.758 "strip_size_kb": 64,
00:18:13.758 "state": "configuring",
00:18:13.758 "raid_level": "raid5f",
00:18:13.758 "superblock": true,
00:18:13.758 "num_base_bdevs": 4,
00:18:13.758 "num_base_bdevs_discovered": 3,
00:18:13.758 "num_base_bdevs_operational": 4,
00:18:13.758 "base_bdevs_list": [
00:18:13.758 {
00:18:13.758 "name": "BaseBdev1",
00:18:13.758 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:13.758 "is_configured": true,
00:18:13.758 "data_offset": 2048,
00:18:13.758 "data_size": 63488
00:18:13.758 },
00:18:13.758 {
00:18:13.758 "name": null,
00:18:13.758 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:13.758 "is_configured": false,
00:18:13.758 "data_offset": 0,
00:18:13.758 "data_size": 63488
00:18:13.758 },
00:18:13.758 {
00:18:13.758 "name": "BaseBdev3",
00:18:13.758 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:13.758 "is_configured": true,
00:18:13.758 "data_offset": 2048,
00:18:13.758 "data_size": 63488
00:18:13.758 },
00:18:13.758 {
00:18:13.758 "name": "BaseBdev4",
00:18:13.758 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:13.758 "is_configured": true,
00:18:13.758 "data_offset": 2048,
00:18:13.758 "data_size": 63488
00:18:13.758 }
00:18:13.758 ]
00:18:13.758 }'
00:18:13.758 20:11:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:13.758 20:11:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.022 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.022 [2024-12-05 20:11:15.399752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.280 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:14.280 "name": "Existed_Raid",
00:18:14.280 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:14.280 "strip_size_kb": 64,
00:18:14.280 "state": "configuring",
00:18:14.280 "raid_level": "raid5f",
00:18:14.280 "superblock": true,
00:18:14.280 "num_base_bdevs": 4,
00:18:14.280 "num_base_bdevs_discovered": 2,
00:18:14.280 "num_base_bdevs_operational": 4,
00:18:14.280 "base_bdevs_list": [
00:18:14.280 {
00:18:14.280 "name": null,
00:18:14.280 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:14.280 "is_configured": false,
00:18:14.280 "data_offset": 0,
00:18:14.280 "data_size": 63488
00:18:14.280 },
00:18:14.280 {
00:18:14.280 "name": null,
00:18:14.280 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:14.280 "is_configured": false,
00:18:14.280 "data_offset": 0,
00:18:14.280 "data_size": 63488
00:18:14.280 },
00:18:14.280 {
00:18:14.280 "name": "BaseBdev3",
00:18:14.280 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:14.281 "is_configured": true,
00:18:14.281 "data_offset": 2048,
00:18:14.281 "data_size": 63488
00:18:14.281 },
00:18:14.281 {
00:18:14.281 "name": "BaseBdev4",
00:18:14.281 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:14.281 "is_configured": true,
00:18:14.281 "data_offset": 2048,
00:18:14.281 "data_size": 63488
00:18:14.281 }
00:18:14.281 ]
00:18:14.281 }'
00:18:14.281 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:14.281 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.539 [2024-12-05 20:11:15.947732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:14.539 20:11:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:14.799 20:11:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.799 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:14.799 "name": "Existed_Raid",
00:18:14.799 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9",
00:18:14.799 "strip_size_kb": 64,
00:18:14.799 "state": "configuring",
00:18:14.799 "raid_level": "raid5f",
00:18:14.799 "superblock": true,
00:18:14.799 "num_base_bdevs": 4,
00:18:14.799 "num_base_bdevs_discovered": 3,
00:18:14.799 "num_base_bdevs_operational": 4,
00:18:14.799 "base_bdevs_list": [
00:18:14.799 {
00:18:14.799 "name": null,
00:18:14.799 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7",
00:18:14.799 "is_configured": false,
00:18:14.799 "data_offset": 0,
00:18:14.799 "data_size": 63488
00:18:14.799 },
00:18:14.799 {
00:18:14.799 "name": "BaseBdev2",
00:18:14.799 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95",
00:18:14.799 "is_configured": true,
00:18:14.799 "data_offset": 2048,
00:18:14.799 "data_size": 63488
00:18:14.799 },
00:18:14.799 {
00:18:14.799 "name": "BaseBdev3",
00:18:14.799 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a",
00:18:14.799 "is_configured": true,
00:18:14.799 "data_offset": 2048,
00:18:14.799 "data_size": 63488
00:18:14.799 },
00:18:14.799 {
00:18:14.799 "name": "BaseBdev4",
00:18:14.799 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b",
00:18:14.799 "is_configured": true,
00:18:14.799 "data_offset": 2048,
00:18:14.799 "data_size": 63488
00:18:14.799 }
00:18:14.799 ]
00:18:14.799 }'
00:18:14.799 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:14.799 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.059 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:15.319 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 83e73f80-dd91-4287-9d5f-7188f0ab88b7
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:15.320 [2024-12-05 20:11:16.535550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:18:15.320 [2024-12-05 20:11:16.535769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:18:15.320 [2024-12-05 20:11:16.535782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:18:15.320 [2024-12-05 20:11:16.536062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:18:15.320 NewBaseBdev
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:15.320 [2024-12-05 20:11:16.543356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:18:15.320 [2024-12-05 20:11:16.543379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:18:15.320 [2024-12-05 20:11:16.543606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.320 [ 00:18:15.320 { 00:18:15.320 "name": "NewBaseBdev", 00:18:15.320 "aliases": [ 00:18:15.320 "83e73f80-dd91-4287-9d5f-7188f0ab88b7" 00:18:15.320 ], 00:18:15.320 "product_name": "Malloc disk", 00:18:15.320 "block_size": 512, 00:18:15.320 "num_blocks": 65536, 00:18:15.320 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7", 00:18:15.320 "assigned_rate_limits": { 00:18:15.320 "rw_ios_per_sec": 0, 00:18:15.320 "rw_mbytes_per_sec": 0, 00:18:15.320 "r_mbytes_per_sec": 0, 00:18:15.320 "w_mbytes_per_sec": 0 00:18:15.320 }, 00:18:15.320 "claimed": true, 00:18:15.320 "claim_type": "exclusive_write", 00:18:15.320 "zoned": false, 00:18:15.320 "supported_io_types": { 00:18:15.320 "read": true, 00:18:15.320 "write": true, 00:18:15.320 "unmap": true, 00:18:15.320 "flush": true, 00:18:15.320 "reset": true, 00:18:15.320 "nvme_admin": false, 00:18:15.320 "nvme_io": false, 00:18:15.320 "nvme_io_md": false, 00:18:15.320 "write_zeroes": true, 00:18:15.320 "zcopy": true, 00:18:15.320 "get_zone_info": false, 00:18:15.320 "zone_management": false, 00:18:15.320 "zone_append": false, 00:18:15.320 "compare": false, 00:18:15.320 "compare_and_write": false, 00:18:15.320 "abort": true, 00:18:15.320 "seek_hole": false, 00:18:15.320 "seek_data": false, 00:18:15.320 "copy": true, 00:18:15.320 "nvme_iov_md": false 00:18:15.320 }, 00:18:15.320 "memory_domains": [ 00:18:15.320 { 00:18:15.320 "dma_device_id": "system", 00:18:15.320 "dma_device_type": 1 00:18:15.320 }, 00:18:15.320 { 00:18:15.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.320 "dma_device_type": 2 00:18:15.320 } 
00:18:15.320 ], 00:18:15.320 "driver_specific": {} 00:18:15.320 } 00:18:15.320 ] 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.320 
20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.320 "name": "Existed_Raid", 00:18:15.320 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9", 00:18:15.320 "strip_size_kb": 64, 00:18:15.320 "state": "online", 00:18:15.320 "raid_level": "raid5f", 00:18:15.320 "superblock": true, 00:18:15.320 "num_base_bdevs": 4, 00:18:15.320 "num_base_bdevs_discovered": 4, 00:18:15.320 "num_base_bdevs_operational": 4, 00:18:15.320 "base_bdevs_list": [ 00:18:15.320 { 00:18:15.320 "name": "NewBaseBdev", 00:18:15.320 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7", 00:18:15.320 "is_configured": true, 00:18:15.320 "data_offset": 2048, 00:18:15.320 "data_size": 63488 00:18:15.320 }, 00:18:15.320 { 00:18:15.320 "name": "BaseBdev2", 00:18:15.320 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95", 00:18:15.320 "is_configured": true, 00:18:15.320 "data_offset": 2048, 00:18:15.320 "data_size": 63488 00:18:15.320 }, 00:18:15.320 { 00:18:15.320 "name": "BaseBdev3", 00:18:15.320 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a", 00:18:15.320 "is_configured": true, 00:18:15.320 "data_offset": 2048, 00:18:15.320 "data_size": 63488 00:18:15.320 }, 00:18:15.320 { 00:18:15.320 "name": "BaseBdev4", 00:18:15.320 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b", 00:18:15.320 "is_configured": true, 00:18:15.320 "data_offset": 2048, 00:18:15.320 "data_size": 63488 00:18:15.320 } 00:18:15.320 ] 00:18:15.320 }' 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.320 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 20:11:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.580 [2024-12-05 20:11:17.003248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.580 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.840 "name": "Existed_Raid", 00:18:15.840 "aliases": [ 00:18:15.840 "a06e2192-4d2c-4de9-a447-c53ee52181c9" 00:18:15.840 ], 00:18:15.840 "product_name": "Raid Volume", 00:18:15.840 "block_size": 512, 00:18:15.840 "num_blocks": 190464, 00:18:15.840 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9", 00:18:15.840 "assigned_rate_limits": { 00:18:15.840 "rw_ios_per_sec": 0, 00:18:15.840 "rw_mbytes_per_sec": 0, 00:18:15.840 "r_mbytes_per_sec": 0, 00:18:15.840 "w_mbytes_per_sec": 0 00:18:15.840 }, 00:18:15.840 "claimed": false, 00:18:15.840 "zoned": false, 00:18:15.840 "supported_io_types": { 00:18:15.840 "read": true, 00:18:15.840 "write": true, 00:18:15.840 "unmap": false, 00:18:15.840 "flush": false, 
00:18:15.840 "reset": true, 00:18:15.840 "nvme_admin": false, 00:18:15.840 "nvme_io": false, 00:18:15.840 "nvme_io_md": false, 00:18:15.840 "write_zeroes": true, 00:18:15.840 "zcopy": false, 00:18:15.840 "get_zone_info": false, 00:18:15.840 "zone_management": false, 00:18:15.840 "zone_append": false, 00:18:15.840 "compare": false, 00:18:15.840 "compare_and_write": false, 00:18:15.840 "abort": false, 00:18:15.840 "seek_hole": false, 00:18:15.840 "seek_data": false, 00:18:15.840 "copy": false, 00:18:15.840 "nvme_iov_md": false 00:18:15.840 }, 00:18:15.840 "driver_specific": { 00:18:15.840 "raid": { 00:18:15.840 "uuid": "a06e2192-4d2c-4de9-a447-c53ee52181c9", 00:18:15.840 "strip_size_kb": 64, 00:18:15.840 "state": "online", 00:18:15.840 "raid_level": "raid5f", 00:18:15.840 "superblock": true, 00:18:15.840 "num_base_bdevs": 4, 00:18:15.840 "num_base_bdevs_discovered": 4, 00:18:15.840 "num_base_bdevs_operational": 4, 00:18:15.840 "base_bdevs_list": [ 00:18:15.840 { 00:18:15.840 "name": "NewBaseBdev", 00:18:15.840 "uuid": "83e73f80-dd91-4287-9d5f-7188f0ab88b7", 00:18:15.840 "is_configured": true, 00:18:15.840 "data_offset": 2048, 00:18:15.840 "data_size": 63488 00:18:15.840 }, 00:18:15.840 { 00:18:15.840 "name": "BaseBdev2", 00:18:15.840 "uuid": "2e7a2429-e2f6-4a58-815e-6599a6957a95", 00:18:15.840 "is_configured": true, 00:18:15.840 "data_offset": 2048, 00:18:15.840 "data_size": 63488 00:18:15.840 }, 00:18:15.840 { 00:18:15.840 "name": "BaseBdev3", 00:18:15.840 "uuid": "3563aed0-bf1f-46d5-91de-21f2d7c0a51a", 00:18:15.840 "is_configured": true, 00:18:15.840 "data_offset": 2048, 00:18:15.840 "data_size": 63488 00:18:15.840 }, 00:18:15.840 { 00:18:15.840 "name": "BaseBdev4", 00:18:15.840 "uuid": "1664ddbf-2753-4a72-b1c6-46c97bb73c4b", 00:18:15.840 "is_configured": true, 00:18:15.840 "data_offset": 2048, 00:18:15.840 "data_size": 63488 00:18:15.840 } 00:18:15.840 ] 00:18:15.840 } 00:18:15.840 } 00:18:15.840 }' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:15.840 BaseBdev2 00:18:15.840 BaseBdev3 00:18:15.840 BaseBdev4' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.840 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:15.841 20:11:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.841 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.100 [2024-12-05 20:11:17.326502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.100 [2024-12-05 20:11:17.326570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.100 [2024-12-05 20:11:17.326656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.100 [2024-12-05 20:11:17.326968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.100 [2024-12-05 20:11:17.327022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83518 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83518 ']' 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83518 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83518 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83518' 00:18:16.100 killing process with pid 83518 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83518 00:18:16.100 [2024-12-05 20:11:17.365836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.100 20:11:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83518 00:18:16.359 [2024-12-05 20:11:17.740897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.737 20:11:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:17.737 00:18:17.737 real 0m11.303s 00:18:17.737 user 0m18.015s 00:18:17.737 sys 0m2.009s 00:18:17.737 20:11:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.737 20:11:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.737 ************************************ 00:18:17.737 END TEST raid5f_state_function_test_sb 00:18:17.737 ************************************ 00:18:17.737 20:11:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:17.737 20:11:18 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:17.737 20:11:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.737 20:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.737 ************************************ 00:18:17.737 START TEST raid5f_superblock_test 00:18:17.737 ************************************ 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84187 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84187 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84187 ']' 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.737 20:11:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.737 [2024-12-05 20:11:18.950656] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:18:17.737 [2024-12-05 20:11:18.950864] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84187 ] 00:18:17.737 [2024-12-05 20:11:19.123472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.997 [2024-12-05 20:11:19.230248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.997 [2024-12-05 20:11:19.415145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.997 [2024-12-05 20:11:19.415182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.566 malloc1 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.566 [2024-12-05 20:11:19.809165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.566 [2024-12-05 20:11:19.809264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.566 [2024-12-05 20:11:19.809319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:18.566 [2024-12-05 20:11:19.809347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.566 [2024-12-05 20:11:19.811368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.566 [2024-12-05 20:11:19.811452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.566 pt1 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:18.566 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 malloc2 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 [2024-12-05 20:11:19.866750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.567 [2024-12-05 20:11:19.866853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.567 [2024-12-05 20:11:19.866895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:18.567 [2024-12-05 20:11:19.866933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.567 [2024-12-05 20:11:19.869015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.567 [2024-12-05 20:11:19.869061] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:18.567 pt2 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 malloc3 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 [2024-12-05 20:11:19.931176] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.567 [2024-12-05 20:11:19.931263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.567 [2024-12-05 20:11:19.931318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:18.567 [2024-12-05 20:11:19.931347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.567 [2024-12-05 20:11:19.933443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.567 [2024-12-05 20:11:19.933510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.567 pt3 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 malloc4 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.567 [2024-12-05 20:11:19.989209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:18.567 [2024-12-05 20:11:19.989303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.567 [2024-12-05 20:11:19.989342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:18.567 [2024-12-05 20:11:19.989370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.567 [2024-12-05 20:11:19.991377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.567 [2024-12-05 20:11:19.991459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:18.567 pt4 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.567 20:11:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.567 [2024-12-05 20:11:20.001220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.827 [2024-12-05 20:11:20.002988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.827 [2024-12-05 20:11:20.003111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.827 [2024-12-05 20:11:20.003182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:18.827 [2024-12-05 20:11:20.003383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.827 [2024-12-05 20:11:20.003431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:18.827 [2024-12-05 20:11:20.003680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:18.827 [2024-12-05 20:11:20.010992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.827 [2024-12-05 20:11:20.011045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:18.827 [2024-12-05 20:11:20.011249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.827 
20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.827 "name": "raid_bdev1", 00:18:18.827 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:18.827 "strip_size_kb": 64, 00:18:18.827 "state": "online", 00:18:18.827 "raid_level": "raid5f", 00:18:18.827 "superblock": true, 00:18:18.827 "num_base_bdevs": 4, 00:18:18.827 "num_base_bdevs_discovered": 4, 00:18:18.827 "num_base_bdevs_operational": 4, 00:18:18.827 "base_bdevs_list": [ 00:18:18.827 { 00:18:18.827 "name": "pt1", 00:18:18.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.827 "is_configured": true, 00:18:18.827 "data_offset": 2048, 00:18:18.827 "data_size": 63488 00:18:18.827 }, 00:18:18.827 { 00:18:18.827 "name": "pt2", 00:18:18.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.827 "is_configured": true, 00:18:18.827 "data_offset": 2048, 00:18:18.827 
"data_size": 63488 00:18:18.827 }, 00:18:18.827 { 00:18:18.827 "name": "pt3", 00:18:18.827 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.827 "is_configured": true, 00:18:18.827 "data_offset": 2048, 00:18:18.827 "data_size": 63488 00:18:18.827 }, 00:18:18.827 { 00:18:18.827 "name": "pt4", 00:18:18.827 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:18.827 "is_configured": true, 00:18:18.827 "data_offset": 2048, 00:18:18.827 "data_size": 63488 00:18:18.827 } 00:18:18.827 ] 00:18:18.827 }' 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.827 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.087 [2024-12-05 20:11:20.474850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.087 "name": "raid_bdev1", 00:18:19.087 "aliases": [ 00:18:19.087 "38c37107-94ff-4730-8885-ebe376c38df8" 00:18:19.087 ], 00:18:19.087 "product_name": "Raid Volume", 00:18:19.087 "block_size": 512, 00:18:19.087 "num_blocks": 190464, 00:18:19.087 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:19.087 "assigned_rate_limits": { 00:18:19.087 "rw_ios_per_sec": 0, 00:18:19.087 "rw_mbytes_per_sec": 0, 00:18:19.087 "r_mbytes_per_sec": 0, 00:18:19.087 "w_mbytes_per_sec": 0 00:18:19.087 }, 00:18:19.087 "claimed": false, 00:18:19.087 "zoned": false, 00:18:19.087 "supported_io_types": { 00:18:19.087 "read": true, 00:18:19.087 "write": true, 00:18:19.087 "unmap": false, 00:18:19.087 "flush": false, 00:18:19.087 "reset": true, 00:18:19.087 "nvme_admin": false, 00:18:19.087 "nvme_io": false, 00:18:19.087 "nvme_io_md": false, 00:18:19.087 "write_zeroes": true, 00:18:19.087 "zcopy": false, 00:18:19.087 "get_zone_info": false, 00:18:19.087 "zone_management": false, 00:18:19.087 "zone_append": false, 00:18:19.087 "compare": false, 00:18:19.087 "compare_and_write": false, 00:18:19.087 "abort": false, 00:18:19.087 "seek_hole": false, 00:18:19.087 "seek_data": false, 00:18:19.087 "copy": false, 00:18:19.087 "nvme_iov_md": false 00:18:19.087 }, 00:18:19.087 "driver_specific": { 00:18:19.087 "raid": { 00:18:19.087 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:19.087 "strip_size_kb": 64, 00:18:19.087 "state": "online", 00:18:19.087 "raid_level": "raid5f", 00:18:19.087 "superblock": true, 00:18:19.087 "num_base_bdevs": 4, 00:18:19.087 "num_base_bdevs_discovered": 4, 00:18:19.087 "num_base_bdevs_operational": 4, 00:18:19.087 "base_bdevs_list": [ 00:18:19.087 { 00:18:19.087 "name": "pt1", 00:18:19.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.087 "is_configured": true, 00:18:19.087 "data_offset": 2048, 
00:18:19.087 "data_size": 63488 00:18:19.087 }, 00:18:19.087 { 00:18:19.087 "name": "pt2", 00:18:19.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.087 "is_configured": true, 00:18:19.087 "data_offset": 2048, 00:18:19.087 "data_size": 63488 00:18:19.087 }, 00:18:19.087 { 00:18:19.087 "name": "pt3", 00:18:19.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.087 "is_configured": true, 00:18:19.087 "data_offset": 2048, 00:18:19.087 "data_size": 63488 00:18:19.087 }, 00:18:19.087 { 00:18:19.087 "name": "pt4", 00:18:19.087 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.087 "is_configured": true, 00:18:19.087 "data_offset": 2048, 00:18:19.087 "data_size": 63488 00:18:19.087 } 00:18:19.087 ] 00:18:19.087 } 00:18:19.087 } 00:18:19.087 }' 00:18:19.087 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.347 pt2 00:18:19.347 pt3 00:18:19.347 pt4' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.347 20:11:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:19.347 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.348 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 [2024-12-05 20:11:20.806250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=38c37107-94ff-4730-8885-ebe376c38df8 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
38c37107-94ff-4730-8885-ebe376c38df8 ']' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 [2024-12-05 20:11:20.834046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.608 [2024-12-05 20:11:20.834069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.608 [2024-12-05 20:11:20.834138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.608 [2024-12-05 20:11:20.834216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.608 [2024-12-05 20:11:20.834229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.608 
20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:19.608 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.608 [2024-12-05 20:11:20.981817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:19.608 [2024-12-05 20:11:20.983557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:19.608 [2024-12-05 20:11:20.983660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:19.608 [2024-12-05 20:11:20.983711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:19.608 [2024-12-05 20:11:20.983790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:19.608 [2024-12-05 20:11:20.983872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:19.608 [2024-12-05 20:11:20.983956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:19.608 [2024-12-05 20:11:20.983978] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:19.608 [2024-12-05 20:11:20.983990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.608 [2024-12-05 20:11:20.984001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:19.608 request: 00:18:19.608 { 00:18:19.608 "name": "raid_bdev1", 00:18:19.608 "raid_level": "raid5f", 00:18:19.608 "base_bdevs": [ 00:18:19.608 "malloc1", 00:18:19.608 "malloc2", 00:18:19.608 "malloc3", 00:18:19.608 "malloc4" 00:18:19.608 ], 00:18:19.608 "strip_size_kb": 64, 00:18:19.608 "superblock": false, 00:18:19.608 "method": "bdev_raid_create", 00:18:19.608 "req_id": 1 00:18:19.608 } 00:18:19.608 Got JSON-RPC error response 
00:18:19.608 response: 00:18:19.608 { 00:18:19.608 "code": -17, 00:18:19.608 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:19.609 } 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.609 20:11:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.609 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.869 [2024-12-05 20:11:21.049689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.869 [2024-12-05 20:11:21.049782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:19.869 [2024-12-05 20:11:21.049834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:19.869 [2024-12-05 20:11:21.049868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.869 [2024-12-05 20:11:21.051977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.869 [2024-12-05 20:11:21.052046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.869 [2024-12-05 20:11:21.052141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:19.869 [2024-12-05 20:11:21.052254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.869 pt1 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.869 "name": "raid_bdev1", 00:18:19.869 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:19.869 "strip_size_kb": 64, 00:18:19.869 "state": "configuring", 00:18:19.869 "raid_level": "raid5f", 00:18:19.869 "superblock": true, 00:18:19.869 "num_base_bdevs": 4, 00:18:19.869 "num_base_bdevs_discovered": 1, 00:18:19.869 "num_base_bdevs_operational": 4, 00:18:19.869 "base_bdevs_list": [ 00:18:19.869 { 00:18:19.869 "name": "pt1", 00:18:19.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.869 "is_configured": true, 00:18:19.869 "data_offset": 2048, 00:18:19.869 "data_size": 63488 00:18:19.869 }, 00:18:19.869 { 00:18:19.869 "name": null, 00:18:19.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.869 "is_configured": false, 00:18:19.869 "data_offset": 2048, 00:18:19.869 "data_size": 63488 00:18:19.869 }, 00:18:19.869 { 00:18:19.869 "name": null, 00:18:19.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.869 "is_configured": false, 00:18:19.869 "data_offset": 2048, 00:18:19.869 "data_size": 63488 00:18:19.869 }, 00:18:19.869 { 00:18:19.869 "name": null, 00:18:19.869 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.869 "is_configured": false, 00:18:19.869 "data_offset": 2048, 00:18:19.869 "data_size": 63488 00:18:19.869 } 00:18:19.869 ] 00:18:19.869 }' 
00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.869 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.129 [2024-12-05 20:11:21.476964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.129 [2024-12-05 20:11:21.477033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.129 [2024-12-05 20:11:21.477052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:20.129 [2024-12-05 20:11:21.477063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.129 [2024-12-05 20:11:21.477499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.129 [2024-12-05 20:11:21.477519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.129 [2024-12-05 20:11:21.477599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.129 [2024-12-05 20:11:21.477621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.129 pt2 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.129 [2024-12-05 20:11:21.484966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:20.129 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.129 "name": "raid_bdev1", 00:18:20.129 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:20.129 "strip_size_kb": 64, 00:18:20.129 "state": "configuring", 00:18:20.129 "raid_level": "raid5f", 00:18:20.130 "superblock": true, 00:18:20.130 "num_base_bdevs": 4, 00:18:20.130 "num_base_bdevs_discovered": 1, 00:18:20.130 "num_base_bdevs_operational": 4, 00:18:20.130 "base_bdevs_list": [ 00:18:20.130 { 00:18:20.130 "name": "pt1", 00:18:20.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.130 "is_configured": true, 00:18:20.130 "data_offset": 2048, 00:18:20.130 "data_size": 63488 00:18:20.130 }, 00:18:20.130 { 00:18:20.130 "name": null, 00:18:20.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.130 "is_configured": false, 00:18:20.130 "data_offset": 0, 00:18:20.130 "data_size": 63488 00:18:20.130 }, 00:18:20.130 { 00:18:20.130 "name": null, 00:18:20.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.130 "is_configured": false, 00:18:20.130 "data_offset": 2048, 00:18:20.130 "data_size": 63488 00:18:20.130 }, 00:18:20.130 { 00:18:20.130 "name": null, 00:18:20.130 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.130 "is_configured": false, 00:18:20.130 "data_offset": 2048, 00:18:20.130 "data_size": 63488 00:18:20.130 } 00:18:20.130 ] 00:18:20.130 }' 00:18:20.130 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.130 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-12-05 20:11:21.900262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.700 [2024-12-05 20:11:21.900377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.700 [2024-12-05 20:11:21.900421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:20.700 [2024-12-05 20:11:21.900450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.700 [2024-12-05 20:11:21.900978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.700 [2024-12-05 20:11:21.901035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.700 [2024-12-05 20:11:21.901152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.700 [2024-12-05 20:11:21.901200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.700 pt2 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-12-05 20:11:21.912211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:20.700 [2024-12-05 20:11:21.912292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.700 [2024-12-05 20:11:21.912338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:20.700 [2024-12-05 20:11:21.912368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.700 [2024-12-05 20:11:21.912767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.700 [2024-12-05 20:11:21.912819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:20.700 [2024-12-05 20:11:21.912922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:20.700 [2024-12-05 20:11:21.912978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.700 pt3 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.700 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-12-05 20:11:21.924170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:20.700 [2024-12-05 20:11:21.924239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.700 [2024-12-05 20:11:21.924292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:20.700 [2024-12-05 20:11:21.924318] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.700 [2024-12-05 20:11:21.924702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.700 [2024-12-05 20:11:21.924755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:20.700 [2024-12-05 20:11:21.924836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:20.700 [2024-12-05 20:11:21.924891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:20.700 [2024-12-05 20:11:21.925046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:20.700 [2024-12-05 20:11:21.925081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:20.701 [2024-12-05 20:11:21.925328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:20.701 [2024-12-05 20:11:21.932212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:20.701 [2024-12-05 20:11:21.932243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:20.701 [2024-12-05 20:11:21.932412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.701 pt4 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.701 "name": "raid_bdev1", 00:18:20.701 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:20.701 "strip_size_kb": 64, 00:18:20.701 "state": "online", 00:18:20.701 "raid_level": "raid5f", 00:18:20.701 "superblock": true, 00:18:20.701 "num_base_bdevs": 4, 00:18:20.701 "num_base_bdevs_discovered": 4, 00:18:20.701 "num_base_bdevs_operational": 4, 00:18:20.701 "base_bdevs_list": [ 00:18:20.701 { 00:18:20.701 "name": "pt1", 00:18:20.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.701 "is_configured": true, 00:18:20.701 
"data_offset": 2048, 00:18:20.701 "data_size": 63488 00:18:20.701 }, 00:18:20.701 { 00:18:20.701 "name": "pt2", 00:18:20.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.701 "is_configured": true, 00:18:20.701 "data_offset": 2048, 00:18:20.701 "data_size": 63488 00:18:20.701 }, 00:18:20.701 { 00:18:20.701 "name": "pt3", 00:18:20.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.701 "is_configured": true, 00:18:20.701 "data_offset": 2048, 00:18:20.701 "data_size": 63488 00:18:20.701 }, 00:18:20.701 { 00:18:20.701 "name": "pt4", 00:18:20.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.701 "is_configured": true, 00:18:20.701 "data_offset": 2048, 00:18:20.701 "data_size": 63488 00:18:20.701 } 00:18:20.701 ] 00:18:20.701 }' 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.701 20:11:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.961 20:11:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.961 [2024-12-05 20:11:22.360208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.961 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.225 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.225 "name": "raid_bdev1", 00:18:21.225 "aliases": [ 00:18:21.225 "38c37107-94ff-4730-8885-ebe376c38df8" 00:18:21.225 ], 00:18:21.225 "product_name": "Raid Volume", 00:18:21.225 "block_size": 512, 00:18:21.225 "num_blocks": 190464, 00:18:21.225 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:21.225 "assigned_rate_limits": { 00:18:21.225 "rw_ios_per_sec": 0, 00:18:21.225 "rw_mbytes_per_sec": 0, 00:18:21.225 "r_mbytes_per_sec": 0, 00:18:21.225 "w_mbytes_per_sec": 0 00:18:21.225 }, 00:18:21.225 "claimed": false, 00:18:21.225 "zoned": false, 00:18:21.225 "supported_io_types": { 00:18:21.225 "read": true, 00:18:21.225 "write": true, 00:18:21.225 "unmap": false, 00:18:21.225 "flush": false, 00:18:21.226 "reset": true, 00:18:21.226 "nvme_admin": false, 00:18:21.226 "nvme_io": false, 00:18:21.226 "nvme_io_md": false, 00:18:21.226 "write_zeroes": true, 00:18:21.226 "zcopy": false, 00:18:21.226 "get_zone_info": false, 00:18:21.226 "zone_management": false, 00:18:21.226 "zone_append": false, 00:18:21.226 "compare": false, 00:18:21.226 "compare_and_write": false, 00:18:21.226 "abort": false, 00:18:21.226 "seek_hole": false, 00:18:21.226 "seek_data": false, 00:18:21.226 "copy": false, 00:18:21.226 "nvme_iov_md": false 00:18:21.226 }, 00:18:21.226 "driver_specific": { 00:18:21.226 "raid": { 00:18:21.226 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:21.226 "strip_size_kb": 64, 00:18:21.226 "state": "online", 00:18:21.226 "raid_level": "raid5f", 00:18:21.226 "superblock": true, 00:18:21.226 "num_base_bdevs": 4, 00:18:21.226 "num_base_bdevs_discovered": 4, 
00:18:21.226 "num_base_bdevs_operational": 4, 00:18:21.226 "base_bdevs_list": [ 00:18:21.226 { 00:18:21.226 "name": "pt1", 00:18:21.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.226 "is_configured": true, 00:18:21.226 "data_offset": 2048, 00:18:21.226 "data_size": 63488 00:18:21.226 }, 00:18:21.226 { 00:18:21.226 "name": "pt2", 00:18:21.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.226 "is_configured": true, 00:18:21.226 "data_offset": 2048, 00:18:21.226 "data_size": 63488 00:18:21.226 }, 00:18:21.226 { 00:18:21.226 "name": "pt3", 00:18:21.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.226 "is_configured": true, 00:18:21.226 "data_offset": 2048, 00:18:21.226 "data_size": 63488 00:18:21.226 }, 00:18:21.226 { 00:18:21.226 "name": "pt4", 00:18:21.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:21.226 "is_configured": true, 00:18:21.226 "data_offset": 2048, 00:18:21.226 "data_size": 63488 00:18:21.226 } 00:18:21.226 ] 00:18:21.226 } 00:18:21.226 } 00:18:21.226 }' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:21.226 pt2 00:18:21.226 pt3 00:18:21.226 pt4' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.226 20:11:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.226 
20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.226 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.521 [2024-12-05 20:11:22.711553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 38c37107-94ff-4730-8885-ebe376c38df8 '!=' 38c37107-94ff-4730-8885-ebe376c38df8 ']' 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.521 [2024-12-05 20:11:22.759343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.521 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.521 "name": "raid_bdev1", 00:18:21.521 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:21.521 "strip_size_kb": 64, 00:18:21.521 "state": "online", 00:18:21.521 "raid_level": "raid5f", 00:18:21.521 "superblock": true, 00:18:21.521 "num_base_bdevs": 4, 00:18:21.521 "num_base_bdevs_discovered": 3, 00:18:21.521 "num_base_bdevs_operational": 3, 00:18:21.521 "base_bdevs_list": [ 00:18:21.521 { 00:18:21.521 "name": null, 00:18:21.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.521 "is_configured": false, 00:18:21.521 "data_offset": 0, 00:18:21.521 "data_size": 63488 00:18:21.521 }, 00:18:21.521 { 00:18:21.521 "name": "pt2", 00:18:21.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.521 "is_configured": true, 00:18:21.521 "data_offset": 2048, 00:18:21.521 "data_size": 63488 00:18:21.521 }, 00:18:21.521 { 00:18:21.522 "name": "pt3", 00:18:21.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.522 "is_configured": true, 00:18:21.522 "data_offset": 2048, 00:18:21.522 "data_size": 63488 00:18:21.522 }, 00:18:21.522 { 00:18:21.522 "name": "pt4", 00:18:21.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:21.522 "is_configured": true, 00:18:21.522 
"data_offset": 2048, 00:18:21.522 "data_size": 63488 00:18:21.522 } 00:18:21.522 ] 00:18:21.522 }' 00:18:21.522 20:11:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.522 20:11:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.796 [2024-12-05 20:11:23.214563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.796 [2024-12-05 20:11:23.214659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.796 [2024-12-05 20:11:23.214748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.796 [2024-12-05 20:11:23.214825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.796 [2024-12-05 20:11:23.214851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.796 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.055 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.055 [2024-12-05 20:11:23.302394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.055 [2024-12-05 20:11:23.302484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.055 [2024-12-05 20:11:23.302507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:22.056 [2024-12-05 20:11:23.302532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.056 [2024-12-05 20:11:23.304766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.056 [2024-12-05 20:11:23.304803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.056 [2024-12-05 20:11:23.304904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.056 [2024-12-05 20:11:23.304950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.056 pt2 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.056 "name": "raid_bdev1", 00:18:22.056 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:22.056 "strip_size_kb": 64, 00:18:22.056 "state": "configuring", 00:18:22.056 "raid_level": "raid5f", 00:18:22.056 "superblock": true, 00:18:22.056 
"num_base_bdevs": 4, 00:18:22.056 "num_base_bdevs_discovered": 1, 00:18:22.056 "num_base_bdevs_operational": 3, 00:18:22.056 "base_bdevs_list": [ 00:18:22.056 { 00:18:22.056 "name": null, 00:18:22.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.056 "is_configured": false, 00:18:22.056 "data_offset": 2048, 00:18:22.056 "data_size": 63488 00:18:22.056 }, 00:18:22.056 { 00:18:22.056 "name": "pt2", 00:18:22.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.056 "is_configured": true, 00:18:22.056 "data_offset": 2048, 00:18:22.056 "data_size": 63488 00:18:22.056 }, 00:18:22.056 { 00:18:22.056 "name": null, 00:18:22.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.056 "is_configured": false, 00:18:22.056 "data_offset": 2048, 00:18:22.056 "data_size": 63488 00:18:22.056 }, 00:18:22.056 { 00:18:22.056 "name": null, 00:18:22.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.056 "is_configured": false, 00:18:22.056 "data_offset": 2048, 00:18:22.056 "data_size": 63488 00:18:22.056 } 00:18:22.056 ] 00:18:22.056 }' 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.056 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.314 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:22.314 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.314 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.314 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.314 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.314 [2024-12-05 20:11:23.749686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.314 [2024-12-05 
20:11:23.749831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.314 [2024-12-05 20:11:23.749899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:22.314 [2024-12-05 20:11:23.749932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.314 [2024-12-05 20:11:23.750396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.314 [2024-12-05 20:11:23.750452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.314 [2024-12-05 20:11:23.750573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:22.314 [2024-12-05 20:11:23.750623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.573 pt3 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.573 "name": "raid_bdev1", 00:18:22.573 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:22.573 "strip_size_kb": 64, 00:18:22.573 "state": "configuring", 00:18:22.573 "raid_level": "raid5f", 00:18:22.573 "superblock": true, 00:18:22.573 "num_base_bdevs": 4, 00:18:22.573 "num_base_bdevs_discovered": 2, 00:18:22.573 "num_base_bdevs_operational": 3, 00:18:22.573 "base_bdevs_list": [ 00:18:22.573 { 00:18:22.573 "name": null, 00:18:22.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.573 "is_configured": false, 00:18:22.573 "data_offset": 2048, 00:18:22.573 "data_size": 63488 00:18:22.573 }, 00:18:22.573 { 00:18:22.573 "name": "pt2", 00:18:22.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.573 "is_configured": true, 00:18:22.573 "data_offset": 2048, 00:18:22.573 "data_size": 63488 00:18:22.573 }, 00:18:22.573 { 00:18:22.573 "name": "pt3", 00:18:22.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.573 "is_configured": true, 00:18:22.573 "data_offset": 2048, 00:18:22.573 "data_size": 63488 00:18:22.573 }, 00:18:22.573 { 00:18:22.573 "name": null, 00:18:22.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.573 "is_configured": false, 00:18:22.573 "data_offset": 2048, 
00:18:22.573 "data_size": 63488 00:18:22.573 } 00:18:22.573 ] 00:18:22.573 }' 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.573 20:11:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.832 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.832 [2024-12-05 20:11:24.216894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:22.832 [2024-12-05 20:11:24.216952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.832 [2024-12-05 20:11:24.216975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:22.832 [2024-12-05 20:11:24.216984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.832 [2024-12-05 20:11:24.217440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.832 [2024-12-05 20:11:24.217469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:22.832 [2024-12-05 20:11:24.217558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:22.832 [2024-12-05 20:11:24.217585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:22.832 [2024-12-05 20:11:24.217725] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.833 [2024-12-05 20:11:24.217738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:22.833 [2024-12-05 20:11:24.217985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:22.833 [2024-12-05 20:11:24.224961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.833 [2024-12-05 20:11:24.224983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:22.833 [2024-12-05 20:11:24.225268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.833 pt4 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.833 
20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.833 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.092 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.092 "name": "raid_bdev1", 00:18:23.092 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:23.092 "strip_size_kb": 64, 00:18:23.092 "state": "online", 00:18:23.092 "raid_level": "raid5f", 00:18:23.092 "superblock": true, 00:18:23.092 "num_base_bdevs": 4, 00:18:23.092 "num_base_bdevs_discovered": 3, 00:18:23.092 "num_base_bdevs_operational": 3, 00:18:23.092 "base_bdevs_list": [ 00:18:23.092 { 00:18:23.092 "name": null, 00:18:23.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.092 "is_configured": false, 00:18:23.092 "data_offset": 2048, 00:18:23.092 "data_size": 63488 00:18:23.092 }, 00:18:23.092 { 00:18:23.092 "name": "pt2", 00:18:23.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.092 "is_configured": true, 00:18:23.092 "data_offset": 2048, 00:18:23.092 "data_size": 63488 00:18:23.092 }, 00:18:23.092 { 00:18:23.092 "name": "pt3", 00:18:23.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.092 "is_configured": true, 00:18:23.092 "data_offset": 2048, 00:18:23.092 "data_size": 63488 00:18:23.092 }, 00:18:23.092 { 00:18:23.092 "name": "pt4", 00:18:23.092 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.092 "is_configured": true, 00:18:23.092 "data_offset": 2048, 00:18:23.092 "data_size": 63488 00:18:23.092 } 00:18:23.092 ] 00:18:23.092 }' 00:18:23.092 20:11:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.092 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 [2024-12-05 20:11:24.605586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.352 [2024-12-05 20:11:24.605662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.352 [2024-12-05 20:11:24.605761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.352 [2024-12-05 20:11:24.605865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.352 [2024-12-05 20:11:24.605943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.352 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 [2024-12-05 20:11:24.681439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.352 [2024-12-05 20:11:24.681551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.352 [2024-12-05 20:11:24.681580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:23.352 [2024-12-05 20:11:24.681594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.352 [2024-12-05 20:11:24.683822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.352 [2024-12-05 20:11:24.683900] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.353 [2024-12-05 20:11:24.684010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:23.353 [2024-12-05 20:11:24.684093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:23.353 
[2024-12-05 20:11:24.684250] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:23.353 [2024-12-05 20:11:24.684306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.353 [2024-12-05 20:11:24.684372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:23.353 [2024-12-05 20:11:24.684468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.353 [2024-12-05 20:11:24.684606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:23.353 pt1 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.353 "name": "raid_bdev1", 00:18:23.353 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:23.353 "strip_size_kb": 64, 00:18:23.353 "state": "configuring", 00:18:23.353 "raid_level": "raid5f", 00:18:23.353 "superblock": true, 00:18:23.353 "num_base_bdevs": 4, 00:18:23.353 "num_base_bdevs_discovered": 2, 00:18:23.353 "num_base_bdevs_operational": 3, 00:18:23.353 "base_bdevs_list": [ 00:18:23.353 { 00:18:23.353 "name": null, 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.353 "is_configured": false, 00:18:23.353 "data_offset": 2048, 00:18:23.353 "data_size": 63488 00:18:23.353 }, 00:18:23.353 { 00:18:23.353 "name": "pt2", 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.353 "is_configured": true, 00:18:23.353 "data_offset": 2048, 00:18:23.353 "data_size": 63488 00:18:23.353 }, 00:18:23.353 { 00:18:23.353 "name": "pt3", 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.353 "is_configured": true, 00:18:23.353 "data_offset": 2048, 00:18:23.353 "data_size": 63488 00:18:23.353 }, 00:18:23.353 { 00:18:23.353 "name": null, 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.353 "is_configured": false, 00:18:23.353 "data_offset": 2048, 00:18:23.353 "data_size": 63488 00:18:23.353 } 00:18:23.353 ] 
00:18:23.353 }' 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.353 20:11:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 [2024-12-05 20:11:25.216762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:23.936 [2024-12-05 20:11:25.216820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.936 [2024-12-05 20:11:25.216843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:23.936 [2024-12-05 20:11:25.216852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.936 [2024-12-05 20:11:25.217297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.936 [2024-12-05 20:11:25.217315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:23.936 [2024-12-05 20:11:25.217395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:23.936 [2024-12-05 20:11:25.217414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:23.936 [2024-12-05 20:11:25.217549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:23.936 [2024-12-05 20:11:25.217558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:23.936 [2024-12-05 20:11:25.217801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:23.936 [2024-12-05 20:11:25.225175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:23.936 [2024-12-05 20:11:25.225200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:23.936 [2024-12-05 20:11:25.225451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.936 pt4 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.936 20:11:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.936 "name": "raid_bdev1", 00:18:23.936 "uuid": "38c37107-94ff-4730-8885-ebe376c38df8", 00:18:23.936 "strip_size_kb": 64, 00:18:23.936 "state": "online", 00:18:23.936 "raid_level": "raid5f", 00:18:23.936 "superblock": true, 00:18:23.936 "num_base_bdevs": 4, 00:18:23.936 "num_base_bdevs_discovered": 3, 00:18:23.936 "num_base_bdevs_operational": 3, 00:18:23.936 "base_bdevs_list": [ 00:18:23.936 { 00:18:23.936 "name": null, 00:18:23.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.936 "is_configured": false, 00:18:23.936 "data_offset": 2048, 00:18:23.936 "data_size": 63488 00:18:23.936 }, 00:18:23.936 { 00:18:23.936 "name": "pt2", 00:18:23.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.936 "is_configured": true, 00:18:23.936 "data_offset": 2048, 00:18:23.936 "data_size": 63488 00:18:23.936 }, 00:18:23.936 { 00:18:23.936 "name": "pt3", 00:18:23.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.936 "is_configured": true, 00:18:23.936 "data_offset": 2048, 00:18:23.936 "data_size": 63488 
00:18:23.936 }, 00:18:23.936 { 00:18:23.936 "name": "pt4", 00:18:23.936 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.936 "is_configured": true, 00:18:23.936 "data_offset": 2048, 00:18:23.936 "data_size": 63488 00:18:23.936 } 00:18:23.936 ] 00:18:23.936 }' 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.936 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:24.194 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.454 [2024-12-05 20:11:25.677428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 38c37107-94ff-4730-8885-ebe376c38df8 '!=' 38c37107-94ff-4730-8885-ebe376c38df8 ']' 00:18:24.454 20:11:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84187 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84187 ']' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84187 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84187 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.454 killing process with pid 84187 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84187' 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84187 00:18:24.454 [2024-12-05 20:11:25.759848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.454 [2024-12-05 20:11:25.759938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.454 [2024-12-05 20:11:25.760036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.454 [2024-12-05 20:11:25.760052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:24.454 20:11:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84187 00:18:24.714 [2024-12-05 20:11:26.130985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.096 20:11:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:26.096 
00:18:26.096 real 0m8.328s 00:18:26.096 user 0m13.214s 00:18:26.096 sys 0m1.462s 00:18:26.096 20:11:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.096 20:11:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.096 ************************************ 00:18:26.096 END TEST raid5f_superblock_test 00:18:26.096 ************************************ 00:18:26.096 20:11:27 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:26.096 20:11:27 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:26.096 20:11:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.096 20:11:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.096 20:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.096 ************************************ 00:18:26.096 START TEST raid5f_rebuild_test 00:18:26.096 ************************************ 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:26.096 20:11:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84669 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84669 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84669 ']' 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.096 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.096 [2024-12-05 20:11:27.352393] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:18:26.096 [2024-12-05 20:11:27.352604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:26.096 Zero copy mechanism will not be used. 
00:18:26.096 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84669 ] 00:18:26.096 [2024-12-05 20:11:27.526903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.356 [2024-12-05 20:11:27.633918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.616 [2024-12-05 20:11:27.829024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.616 [2024-12-05 20:11:27.829105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 BaseBdev1_malloc 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 [2024-12-05 20:11:28.218988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.876 [2024-12-05 20:11:28.219050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:26.876 [2024-12-05 20:11:28.219072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.876 [2024-12-05 20:11:28.219083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.876 [2024-12-05 20:11:28.221184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.876 [2024-12-05 20:11:28.221225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.876 BaseBdev1 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 BaseBdev2_malloc 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.876 [2024-12-05 20:11:28.271564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:26.876 [2024-12-05 20:11:28.271636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.876 [2024-12-05 20:11:28.271658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.876 [2024-12-05 20:11:28.271669] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.876 [2024-12-05 20:11:28.273724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.876 [2024-12-05 20:11:28.273761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.876 BaseBdev2 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.876 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 BaseBdev3_malloc 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 [2024-12-05 20:11:28.359384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:27.137 [2024-12-05 20:11:28.359434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.137 [2024-12-05 20:11:28.359470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:27.137 [2024-12-05 20:11:28.359481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.137 [2024-12-05 20:11:28.361509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.137 [2024-12-05 
20:11:28.361602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:27.137 BaseBdev3 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 BaseBdev4_malloc 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 [2024-12-05 20:11:28.410655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:27.137 [2024-12-05 20:11:28.410763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.137 [2024-12-05 20:11:28.410787] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:27.137 [2024-12-05 20:11:28.410797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.137 [2024-12-05 20:11:28.412785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.137 [2024-12-05 20:11:28.412826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:27.137 BaseBdev4 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 spare_malloc 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 spare_delay 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 [2024-12-05 20:11:28.475459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.137 [2024-12-05 20:11:28.475509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.137 [2024-12-05 20:11:28.475525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:27.137 [2024-12-05 20:11:28.475535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.137 [2024-12-05 20:11:28.477577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.137 [2024-12-05 20:11:28.477618] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.137 spare 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.137 [2024-12-05 20:11:28.487493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.137 [2024-12-05 20:11:28.489290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.137 [2024-12-05 20:11:28.489351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:27.137 [2024-12-05 20:11:28.489402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:27.137 [2024-12-05 20:11:28.489487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:27.137 [2024-12-05 20:11:28.489501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:27.137 [2024-12-05 20:11:28.489749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:27.137 [2024-12-05 20:11:28.496932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:27.137 [2024-12-05 20:11:28.497003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:27.137 [2024-12-05 20:11:28.497197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.137 20:11:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.137 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.138 "name": "raid_bdev1", 00:18:27.138 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:27.138 "strip_size_kb": 64, 00:18:27.138 "state": "online", 00:18:27.138 "raid_level": "raid5f", 00:18:27.138 "superblock": false, 00:18:27.138 "num_base_bdevs": 4, 00:18:27.138 
"num_base_bdevs_discovered": 4, 00:18:27.138 "num_base_bdevs_operational": 4, 00:18:27.138 "base_bdevs_list": [ 00:18:27.138 { 00:18:27.138 "name": "BaseBdev1", 00:18:27.138 "uuid": "81628a1f-6fe9-5c4b-af9c-24237b2938e1", 00:18:27.138 "is_configured": true, 00:18:27.138 "data_offset": 0, 00:18:27.138 "data_size": 65536 00:18:27.138 }, 00:18:27.138 { 00:18:27.138 "name": "BaseBdev2", 00:18:27.138 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:27.138 "is_configured": true, 00:18:27.138 "data_offset": 0, 00:18:27.138 "data_size": 65536 00:18:27.138 }, 00:18:27.138 { 00:18:27.138 "name": "BaseBdev3", 00:18:27.138 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:27.138 "is_configured": true, 00:18:27.138 "data_offset": 0, 00:18:27.138 "data_size": 65536 00:18:27.138 }, 00:18:27.138 { 00:18:27.138 "name": "BaseBdev4", 00:18:27.138 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:27.138 "is_configured": true, 00:18:27.138 "data_offset": 0, 00:18:27.138 "data_size": 65536 00:18:27.138 } 00:18:27.138 ] 00:18:27.138 }' 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.138 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.707 [2024-12-05 20:11:28.925200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.707 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:27.966 [2024-12-05 20:11:29.148868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:27.966 /dev/nbd0 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.966 1+0 records in 00:18:27.966 1+0 records out 00:18:27.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256923 s, 15.9 MB/s 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:27.966 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:28.535 512+0 records in 00:18:28.535 512+0 records out 00:18:28.535 100663296 bytes (101 MB, 96 MiB) copied, 0.469784 s, 214 MB/s 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.535 [2024-12-05 20:11:29.915383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.535 [2024-12-05 20:11:29.929468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.535 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.536 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.795 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.795 "name": "raid_bdev1", 00:18:28.795 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:28.795 "strip_size_kb": 64, 00:18:28.795 "state": "online", 00:18:28.795 "raid_level": "raid5f", 00:18:28.795 "superblock": false, 00:18:28.795 "num_base_bdevs": 4, 00:18:28.795 "num_base_bdevs_discovered": 3, 00:18:28.795 "num_base_bdevs_operational": 3, 00:18:28.795 "base_bdevs_list": [ 00:18:28.795 { 00:18:28.795 "name": null, 00:18:28.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.795 "is_configured": false, 00:18:28.795 "data_offset": 0, 00:18:28.795 "data_size": 65536 00:18:28.795 }, 00:18:28.795 { 00:18:28.795 "name": "BaseBdev2", 00:18:28.795 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:28.795 "is_configured": true, 00:18:28.795 "data_offset": 0, 00:18:28.795 "data_size": 65536 00:18:28.795 }, 00:18:28.795 { 00:18:28.795 "name": "BaseBdev3", 00:18:28.795 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:28.795 "is_configured": true, 00:18:28.795 "data_offset": 0, 
00:18:28.795 "data_size": 65536 00:18:28.795 }, 00:18:28.795 { 00:18:28.795 "name": "BaseBdev4", 00:18:28.795 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:28.795 "is_configured": true, 00:18:28.795 "data_offset": 0, 00:18:28.795 "data_size": 65536 00:18:28.795 } 00:18:28.795 ] 00:18:28.795 }' 00:18:28.795 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.795 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.055 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.055 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.055 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.055 [2024-12-05 20:11:30.380770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.055 [2024-12-05 20:11:30.395918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:29.055 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.055 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:29.055 [2024-12-05 20:11:30.405152] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.992 20:11:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.992 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.251 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.251 "name": "raid_bdev1", 00:18:30.251 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:30.251 "strip_size_kb": 64, 00:18:30.251 "state": "online", 00:18:30.251 "raid_level": "raid5f", 00:18:30.251 "superblock": false, 00:18:30.251 "num_base_bdevs": 4, 00:18:30.251 "num_base_bdevs_discovered": 4, 00:18:30.251 "num_base_bdevs_operational": 4, 00:18:30.251 "process": { 00:18:30.251 "type": "rebuild", 00:18:30.251 "target": "spare", 00:18:30.252 "progress": { 00:18:30.252 "blocks": 19200, 00:18:30.252 "percent": 9 00:18:30.252 } 00:18:30.252 }, 00:18:30.252 "base_bdevs_list": [ 00:18:30.252 { 00:18:30.252 "name": "spare", 00:18:30.252 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:30.252 "is_configured": true, 00:18:30.252 "data_offset": 0, 00:18:30.252 "data_size": 65536 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "name": "BaseBdev2", 00:18:30.252 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:30.252 "is_configured": true, 00:18:30.252 "data_offset": 0, 00:18:30.252 "data_size": 65536 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "name": "BaseBdev3", 00:18:30.252 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:30.252 "is_configured": true, 00:18:30.252 "data_offset": 0, 00:18:30.252 "data_size": 65536 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "name": "BaseBdev4", 00:18:30.252 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 
00:18:30.252 "is_configured": true, 00:18:30.252 "data_offset": 0, 00:18:30.252 "data_size": 65536 00:18:30.252 } 00:18:30.252 ] 00:18:30.252 }' 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.252 [2024-12-05 20:11:31.563981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.252 [2024-12-05 20:11:31.611349] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.252 [2024-12-05 20:11:31.611408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.252 [2024-12-05 20:11:31.611424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.252 [2024-12-05 20:11:31.611434] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.252 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.511 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.511 "name": "raid_bdev1", 00:18:30.511 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:30.511 "strip_size_kb": 64, 00:18:30.511 "state": "online", 00:18:30.511 "raid_level": "raid5f", 00:18:30.511 "superblock": false, 00:18:30.511 "num_base_bdevs": 4, 00:18:30.511 "num_base_bdevs_discovered": 3, 00:18:30.511 "num_base_bdevs_operational": 3, 00:18:30.511 "base_bdevs_list": [ 00:18:30.511 { 00:18:30.511 "name": null, 00:18:30.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.511 "is_configured": false, 00:18:30.511 "data_offset": 0, 00:18:30.511 "data_size": 65536 
00:18:30.511 }, 00:18:30.511 { 00:18:30.511 "name": "BaseBdev2", 00:18:30.511 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:30.511 "is_configured": true, 00:18:30.511 "data_offset": 0, 00:18:30.511 "data_size": 65536 00:18:30.511 }, 00:18:30.511 { 00:18:30.511 "name": "BaseBdev3", 00:18:30.511 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:30.511 "is_configured": true, 00:18:30.511 "data_offset": 0, 00:18:30.511 "data_size": 65536 00:18:30.511 }, 00:18:30.511 { 00:18:30.511 "name": "BaseBdev4", 00:18:30.511 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:30.511 "is_configured": true, 00:18:30.511 "data_offset": 0, 00:18:30.511 "data_size": 65536 00:18:30.511 } 00:18:30.511 ] 00:18:30.511 }' 00:18:30.511 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.511 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.769 "name": "raid_bdev1", 00:18:30.769 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:30.769 "strip_size_kb": 64, 00:18:30.769 "state": "online", 00:18:30.769 "raid_level": "raid5f", 00:18:30.769 "superblock": false, 00:18:30.769 "num_base_bdevs": 4, 00:18:30.769 "num_base_bdevs_discovered": 3, 00:18:30.769 "num_base_bdevs_operational": 3, 00:18:30.769 "base_bdevs_list": [ 00:18:30.769 { 00:18:30.769 "name": null, 00:18:30.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.769 "is_configured": false, 00:18:30.769 "data_offset": 0, 00:18:30.769 "data_size": 65536 00:18:30.769 }, 00:18:30.769 { 00:18:30.769 "name": "BaseBdev2", 00:18:30.769 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:30.769 "is_configured": true, 00:18:30.769 "data_offset": 0, 00:18:30.769 "data_size": 65536 00:18:30.769 }, 00:18:30.769 { 00:18:30.769 "name": "BaseBdev3", 00:18:30.769 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:30.769 "is_configured": true, 00:18:30.769 "data_offset": 0, 00:18:30.769 "data_size": 65536 00:18:30.769 }, 00:18:30.769 { 00:18:30.769 "name": "BaseBdev4", 00:18:30.769 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:30.769 "is_configured": true, 00:18:30.769 "data_offset": 0, 00:18:30.769 "data_size": 65536 00:18:30.769 } 00:18:30.769 ] 00:18:30.769 }' 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
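Throughout this section the test drives its assertions through the same two jq filters: `'.[] | select(.name == "raid_bdev1")'` to pick the raid bdev out of `bdev_raid_get_bdevs all`, and `'.process.type // "none"'` / `'.process.target // "none"'` so the check degrades cleanly when no rebuild process is running and the `process` object is absent. A minimal sketch of that filtering against a trimmed sample payload (the JSON below is illustrative, shaped like the RPC output in this log, not real SPDK output):

```shell
# Trimmed sample shaped like bdev_raid_get_bdevs output; values are illustrative.
bdevs='[
  {"name": "raid_bdev1", "state": "online",
   "process": {"type": "rebuild", "target": "spare"}},
  {"name": "other_raid", "state": "configuring"}
]'

# bdev_raid.sh@174: select the bdev under test by name.
info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# bdev_raid.sh@176-177: the // "none" alternative falls back when .process is absent.
ptype=$(echo "$info" | jq -r '.process.type // "none"')
ptarget=$(echo "$info" | jq -r '.process.target // "none"')
idle=$(echo '{"name": "raid_bdev1"}' | jq -r '.process.type // "none"')

echo "$ptype $ptarget $idle"   # rebuild spare none
```

This is why the log can run the identical `[[ rebuild == \r\e\b\u\i\l\d ]]` / `[[ none == \n\o\n\e ]]` comparisons both during the rebuild and after it finishes, without special-casing the missing `process` key.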
00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.769 [2024-12-05 20:11:32.183218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.769 [2024-12-05 20:11:32.197647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.769 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:31.028 [2024-12-05 20:11:32.207183] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.972 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.972 
"name": "raid_bdev1", 00:18:31.972 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:31.972 "strip_size_kb": 64, 00:18:31.973 "state": "online", 00:18:31.973 "raid_level": "raid5f", 00:18:31.973 "superblock": false, 00:18:31.973 "num_base_bdevs": 4, 00:18:31.973 "num_base_bdevs_discovered": 4, 00:18:31.973 "num_base_bdevs_operational": 4, 00:18:31.973 "process": { 00:18:31.973 "type": "rebuild", 00:18:31.973 "target": "spare", 00:18:31.973 "progress": { 00:18:31.973 "blocks": 19200, 00:18:31.973 "percent": 9 00:18:31.973 } 00:18:31.973 }, 00:18:31.973 "base_bdevs_list": [ 00:18:31.973 { 00:18:31.973 "name": "spare", 00:18:31.973 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev2", 00:18:31.973 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev3", 00:18:31.973 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev4", 00:18:31.973 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 } 00:18:31.973 ] 00:18:31.973 }' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.973 20:11:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.973 "name": "raid_bdev1", 00:18:31.973 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:31.973 "strip_size_kb": 64, 00:18:31.973 "state": "online", 00:18:31.973 "raid_level": "raid5f", 00:18:31.973 "superblock": false, 00:18:31.973 "num_base_bdevs": 4, 00:18:31.973 
"num_base_bdevs_discovered": 4, 00:18:31.973 "num_base_bdevs_operational": 4, 00:18:31.973 "process": { 00:18:31.973 "type": "rebuild", 00:18:31.973 "target": "spare", 00:18:31.973 "progress": { 00:18:31.973 "blocks": 21120, 00:18:31.973 "percent": 10 00:18:31.973 } 00:18:31.973 }, 00:18:31.973 "base_bdevs_list": [ 00:18:31.973 { 00:18:31.973 "name": "spare", 00:18:31.973 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev2", 00:18:31.973 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev3", 00:18:31.973 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 }, 00:18:31.973 { 00:18:31.973 "name": "BaseBdev4", 00:18:31.973 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:31.973 "is_configured": true, 00:18:31.973 "data_offset": 0, 00:18:31.973 "data_size": 65536 00:18:31.973 } 00:18:31.973 ] 00:18:31.973 }' 00:18:31.973 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.232 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.232 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.232 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.232 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.171 "name": "raid_bdev1", 00:18:33.171 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:33.171 "strip_size_kb": 64, 00:18:33.171 "state": "online", 00:18:33.171 "raid_level": "raid5f", 00:18:33.171 "superblock": false, 00:18:33.171 "num_base_bdevs": 4, 00:18:33.171 "num_base_bdevs_discovered": 4, 00:18:33.171 "num_base_bdevs_operational": 4, 00:18:33.171 "process": { 00:18:33.171 "type": "rebuild", 00:18:33.171 "target": "spare", 00:18:33.171 "progress": { 00:18:33.171 "blocks": 42240, 00:18:33.171 "percent": 21 00:18:33.171 } 00:18:33.171 }, 00:18:33.171 "base_bdevs_list": [ 00:18:33.171 { 00:18:33.171 "name": "spare", 00:18:33.171 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:33.171 "is_configured": true, 00:18:33.171 "data_offset": 0, 00:18:33.171 "data_size": 65536 00:18:33.171 }, 00:18:33.171 { 00:18:33.171 "name": "BaseBdev2", 00:18:33.171 "uuid": 
"35b4f516-7084-5423-b362-1623bba65736", 00:18:33.171 "is_configured": true, 00:18:33.171 "data_offset": 0, 00:18:33.171 "data_size": 65536 00:18:33.171 }, 00:18:33.171 { 00:18:33.171 "name": "BaseBdev3", 00:18:33.171 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:33.171 "is_configured": true, 00:18:33.171 "data_offset": 0, 00:18:33.171 "data_size": 65536 00:18:33.171 }, 00:18:33.171 { 00:18:33.171 "name": "BaseBdev4", 00:18:33.171 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:33.171 "is_configured": true, 00:18:33.171 "data_offset": 0, 00:18:33.171 "data_size": 65536 00:18:33.171 } 00:18:33.171 ] 00:18:33.171 }' 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.171 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.430 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.430 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.367 20:11:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.367 "name": "raid_bdev1", 00:18:34.367 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:34.367 "strip_size_kb": 64, 00:18:34.367 "state": "online", 00:18:34.367 "raid_level": "raid5f", 00:18:34.367 "superblock": false, 00:18:34.367 "num_base_bdevs": 4, 00:18:34.367 "num_base_bdevs_discovered": 4, 00:18:34.367 "num_base_bdevs_operational": 4, 00:18:34.367 "process": { 00:18:34.367 "type": "rebuild", 00:18:34.367 "target": "spare", 00:18:34.367 "progress": { 00:18:34.367 "blocks": 65280, 00:18:34.367 "percent": 33 00:18:34.367 } 00:18:34.367 }, 00:18:34.367 "base_bdevs_list": [ 00:18:34.367 { 00:18:34.367 "name": "spare", 00:18:34.367 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:34.367 "is_configured": true, 00:18:34.367 "data_offset": 0, 00:18:34.367 "data_size": 65536 00:18:34.367 }, 00:18:34.367 { 00:18:34.367 "name": "BaseBdev2", 00:18:34.367 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:34.367 "is_configured": true, 00:18:34.367 "data_offset": 0, 00:18:34.367 "data_size": 65536 00:18:34.367 }, 00:18:34.367 { 00:18:34.367 "name": "BaseBdev3", 00:18:34.367 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:34.367 "is_configured": true, 00:18:34.367 "data_offset": 0, 00:18:34.367 "data_size": 65536 00:18:34.367 }, 00:18:34.367 { 00:18:34.367 "name": "BaseBdev4", 00:18:34.367 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:34.367 "is_configured": true, 00:18:34.367 "data_offset": 0, 00:18:34.367 "data_size": 65536 00:18:34.367 } 
00:18:34.367 ] 00:18:34.367 }' 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.367 20:11:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.761 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.762 "name": "raid_bdev1", 00:18:35.762 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:35.762 
"strip_size_kb": 64, 00:18:35.762 "state": "online", 00:18:35.762 "raid_level": "raid5f", 00:18:35.762 "superblock": false, 00:18:35.762 "num_base_bdevs": 4, 00:18:35.762 "num_base_bdevs_discovered": 4, 00:18:35.762 "num_base_bdevs_operational": 4, 00:18:35.762 "process": { 00:18:35.762 "type": "rebuild", 00:18:35.762 "target": "spare", 00:18:35.762 "progress": { 00:18:35.762 "blocks": 86400, 00:18:35.762 "percent": 43 00:18:35.762 } 00:18:35.762 }, 00:18:35.762 "base_bdevs_list": [ 00:18:35.762 { 00:18:35.762 "name": "spare", 00:18:35.762 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:35.762 "is_configured": true, 00:18:35.762 "data_offset": 0, 00:18:35.762 "data_size": 65536 00:18:35.762 }, 00:18:35.762 { 00:18:35.762 "name": "BaseBdev2", 00:18:35.762 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:35.762 "is_configured": true, 00:18:35.762 "data_offset": 0, 00:18:35.762 "data_size": 65536 00:18:35.762 }, 00:18:35.762 { 00:18:35.762 "name": "BaseBdev3", 00:18:35.762 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:35.762 "is_configured": true, 00:18:35.762 "data_offset": 0, 00:18:35.762 "data_size": 65536 00:18:35.762 }, 00:18:35.762 { 00:18:35.762 "name": "BaseBdev4", 00:18:35.762 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:35.762 "is_configured": true, 00:18:35.762 "data_offset": 0, 00:18:35.762 "data_size": 65536 00:18:35.762 } 00:18:35.762 ] 00:18:35.762 }' 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.762 20:11:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.699 20:11:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.699 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.699 "name": "raid_bdev1", 00:18:36.699 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:36.699 "strip_size_kb": 64, 00:18:36.699 "state": "online", 00:18:36.699 "raid_level": "raid5f", 00:18:36.699 "superblock": false, 00:18:36.699 "num_base_bdevs": 4, 00:18:36.699 "num_base_bdevs_discovered": 4, 00:18:36.699 "num_base_bdevs_operational": 4, 00:18:36.699 "process": { 00:18:36.699 "type": "rebuild", 00:18:36.699 "target": "spare", 00:18:36.699 "progress": { 00:18:36.699 "blocks": 109440, 00:18:36.699 "percent": 55 00:18:36.699 } 00:18:36.699 }, 00:18:36.699 "base_bdevs_list": [ 00:18:36.699 { 00:18:36.699 "name": "spare", 00:18:36.699 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 
00:18:36.699 "is_configured": true, 00:18:36.699 "data_offset": 0, 00:18:36.699 "data_size": 65536 00:18:36.699 }, 00:18:36.699 { 00:18:36.699 "name": "BaseBdev2", 00:18:36.699 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:36.699 "is_configured": true, 00:18:36.699 "data_offset": 0, 00:18:36.699 "data_size": 65536 00:18:36.699 }, 00:18:36.699 { 00:18:36.699 "name": "BaseBdev3", 00:18:36.699 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:36.699 "is_configured": true, 00:18:36.699 "data_offset": 0, 00:18:36.699 "data_size": 65536 00:18:36.699 }, 00:18:36.699 { 00:18:36.699 "name": "BaseBdev4", 00:18:36.699 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:36.699 "is_configured": true, 00:18:36.699 "data_offset": 0, 00:18:36.700 "data_size": 65536 00:18:36.700 } 00:18:36.700 ] 00:18:36.700 }' 00:18:36.700 20:11:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.700 20:11:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.700 20:11:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.700 20:11:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.700 20:11:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.079 "name": "raid_bdev1", 00:18:38.079 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:38.079 "strip_size_kb": 64, 00:18:38.079 "state": "online", 00:18:38.079 "raid_level": "raid5f", 00:18:38.079 "superblock": false, 00:18:38.079 "num_base_bdevs": 4, 00:18:38.079 "num_base_bdevs_discovered": 4, 00:18:38.079 "num_base_bdevs_operational": 4, 00:18:38.079 "process": { 00:18:38.079 "type": "rebuild", 00:18:38.079 "target": "spare", 00:18:38.079 "progress": { 00:18:38.079 "blocks": 130560, 00:18:38.079 "percent": 66 00:18:38.079 } 00:18:38.079 }, 00:18:38.079 "base_bdevs_list": [ 00:18:38.079 { 00:18:38.079 "name": "spare", 00:18:38.079 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:38.079 "is_configured": true, 00:18:38.079 "data_offset": 0, 00:18:38.079 "data_size": 65536 00:18:38.079 }, 00:18:38.079 { 00:18:38.079 "name": "BaseBdev2", 00:18:38.079 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:38.079 "is_configured": true, 00:18:38.079 "data_offset": 0, 00:18:38.079 "data_size": 65536 00:18:38.079 }, 00:18:38.079 { 00:18:38.079 "name": "BaseBdev3", 00:18:38.079 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:38.079 "is_configured": true, 00:18:38.079 "data_offset": 0, 00:18:38.079 "data_size": 65536 00:18:38.079 }, 00:18:38.079 { 00:18:38.079 "name": 
"BaseBdev4", 00:18:38.079 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:38.079 "is_configured": true, 00:18:38.079 "data_offset": 0, 00:18:38.079 "data_size": 65536 00:18:38.079 } 00:18:38.079 ] 00:18:38.079 }' 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.079 20:11:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.016 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.016 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.016 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.016 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.017 20:11:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.017 "name": "raid_bdev1", 00:18:39.017 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:39.017 "strip_size_kb": 64, 00:18:39.017 "state": "online", 00:18:39.017 "raid_level": "raid5f", 00:18:39.017 "superblock": false, 00:18:39.017 "num_base_bdevs": 4, 00:18:39.017 "num_base_bdevs_discovered": 4, 00:18:39.017 "num_base_bdevs_operational": 4, 00:18:39.017 "process": { 00:18:39.017 "type": "rebuild", 00:18:39.017 "target": "spare", 00:18:39.017 "progress": { 00:18:39.017 "blocks": 153600, 00:18:39.017 "percent": 78 00:18:39.017 } 00:18:39.017 }, 00:18:39.017 "base_bdevs_list": [ 00:18:39.017 { 00:18:39.017 "name": "spare", 00:18:39.017 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:39.017 "is_configured": true, 00:18:39.017 "data_offset": 0, 00:18:39.017 "data_size": 65536 00:18:39.017 }, 00:18:39.017 { 00:18:39.017 "name": "BaseBdev2", 00:18:39.017 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:39.017 "is_configured": true, 00:18:39.017 "data_offset": 0, 00:18:39.017 "data_size": 65536 00:18:39.017 }, 00:18:39.017 { 00:18:39.017 "name": "BaseBdev3", 00:18:39.017 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:39.017 "is_configured": true, 00:18:39.017 "data_offset": 0, 00:18:39.017 "data_size": 65536 00:18:39.017 }, 00:18:39.017 { 00:18:39.017 "name": "BaseBdev4", 00:18:39.017 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:39.017 "is_configured": true, 00:18:39.017 "data_offset": 0, 00:18:39.017 "data_size": 65536 00:18:39.017 } 00:18:39.017 ] 00:18:39.017 }' 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.017 20:11:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.450 "name": "raid_bdev1", 00:18:40.450 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:40.450 "strip_size_kb": 64, 00:18:40.450 "state": "online", 00:18:40.450 "raid_level": "raid5f", 00:18:40.450 "superblock": false, 00:18:40.450 "num_base_bdevs": 4, 00:18:40.450 "num_base_bdevs_discovered": 4, 00:18:40.450 "num_base_bdevs_operational": 4, 00:18:40.450 "process": { 00:18:40.450 "type": "rebuild", 00:18:40.450 "target": "spare", 00:18:40.450 "progress": { 00:18:40.450 "blocks": 174720, 00:18:40.450 "percent": 88 
00:18:40.450 } 00:18:40.450 }, 00:18:40.450 "base_bdevs_list": [ 00:18:40.450 { 00:18:40.450 "name": "spare", 00:18:40.450 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:40.450 "is_configured": true, 00:18:40.450 "data_offset": 0, 00:18:40.450 "data_size": 65536 00:18:40.450 }, 00:18:40.450 { 00:18:40.450 "name": "BaseBdev2", 00:18:40.450 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:40.450 "is_configured": true, 00:18:40.450 "data_offset": 0, 00:18:40.450 "data_size": 65536 00:18:40.450 }, 00:18:40.450 { 00:18:40.450 "name": "BaseBdev3", 00:18:40.450 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:40.450 "is_configured": true, 00:18:40.450 "data_offset": 0, 00:18:40.450 "data_size": 65536 00:18:40.450 }, 00:18:40.450 { 00:18:40.450 "name": "BaseBdev4", 00:18:40.450 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:40.450 "is_configured": true, 00:18:40.450 "data_offset": 0, 00:18:40.450 "data_size": 65536 00:18:40.450 } 00:18:40.450 ] 00:18:40.450 }' 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.450 20:11:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.390 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.391 [2024-12-05 20:11:42.560329] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.391 [2024-12-05 20:11:42.560511] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:41.391 [2024-12-05 20:11:42.560579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.391 "name": "raid_bdev1", 00:18:41.391 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:41.391 "strip_size_kb": 64, 00:18:41.391 "state": "online", 00:18:41.391 "raid_level": "raid5f", 00:18:41.391 "superblock": false, 00:18:41.391 "num_base_bdevs": 4, 00:18:41.391 "num_base_bdevs_discovered": 4, 00:18:41.391 "num_base_bdevs_operational": 4, 00:18:41.391 "process": { 00:18:41.391 "type": "rebuild", 00:18:41.391 "target": "spare", 00:18:41.391 "progress": { 00:18:41.391 "blocks": 195840, 00:18:41.391 "percent": 99 00:18:41.391 } 00:18:41.391 }, 00:18:41.391 "base_bdevs_list": [ 00:18:41.391 { 00:18:41.391 "name": "spare", 00:18:41.391 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:41.391 "is_configured": true, 00:18:41.391 "data_offset": 
0, 00:18:41.391 "data_size": 65536 00:18:41.391 }, 00:18:41.391 { 00:18:41.391 "name": "BaseBdev2", 00:18:41.391 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:41.391 "is_configured": true, 00:18:41.391 "data_offset": 0, 00:18:41.391 "data_size": 65536 00:18:41.391 }, 00:18:41.391 { 00:18:41.391 "name": "BaseBdev3", 00:18:41.391 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:41.391 "is_configured": true, 00:18:41.391 "data_offset": 0, 00:18:41.391 "data_size": 65536 00:18:41.391 }, 00:18:41.391 { 00:18:41.391 "name": "BaseBdev4", 00:18:41.391 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:41.391 "is_configured": true, 00:18:41.391 "data_offset": 0, 00:18:41.391 "data_size": 65536 00:18:41.391 } 00:18:41.391 ] 00:18:41.391 }' 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.391 20:11:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.331 20:11:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.331 "name": "raid_bdev1", 00:18:42.331 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:42.331 "strip_size_kb": 64, 00:18:42.331 "state": "online", 00:18:42.331 "raid_level": "raid5f", 00:18:42.331 "superblock": false, 00:18:42.331 "num_base_bdevs": 4, 00:18:42.331 "num_base_bdevs_discovered": 4, 00:18:42.331 "num_base_bdevs_operational": 4, 00:18:42.331 "base_bdevs_list": [ 00:18:42.331 { 00:18:42.331 "name": "spare", 00:18:42.331 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:42.331 "is_configured": true, 00:18:42.331 "data_offset": 0, 00:18:42.331 "data_size": 65536 00:18:42.331 }, 00:18:42.331 { 00:18:42.331 "name": "BaseBdev2", 00:18:42.331 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:42.331 "is_configured": true, 00:18:42.331 "data_offset": 0, 00:18:42.331 "data_size": 65536 00:18:42.331 }, 00:18:42.331 { 00:18:42.331 "name": "BaseBdev3", 00:18:42.331 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:42.331 "is_configured": true, 00:18:42.331 "data_offset": 0, 00:18:42.331 "data_size": 65536 00:18:42.331 }, 00:18:42.331 { 00:18:42.331 "name": "BaseBdev4", 00:18:42.331 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:42.331 "is_configured": true, 00:18:42.331 "data_offset": 0, 00:18:42.331 "data_size": 65536 00:18:42.331 } 00:18:42.331 ] 00:18:42.331 }' 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:42.331 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.591 "name": "raid_bdev1", 00:18:42.591 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:42.591 "strip_size_kb": 64, 00:18:42.591 "state": "online", 00:18:42.591 "raid_level": "raid5f", 00:18:42.591 "superblock": false, 00:18:42.591 "num_base_bdevs": 4, 00:18:42.591 "num_base_bdevs_discovered": 4, 
00:18:42.591 "num_base_bdevs_operational": 4, 00:18:42.591 "base_bdevs_list": [ 00:18:42.591 { 00:18:42.591 "name": "spare", 00:18:42.591 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:42.591 "is_configured": true, 00:18:42.591 "data_offset": 0, 00:18:42.591 "data_size": 65536 00:18:42.591 }, 00:18:42.591 { 00:18:42.591 "name": "BaseBdev2", 00:18:42.591 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:42.591 "is_configured": true, 00:18:42.591 "data_offset": 0, 00:18:42.591 "data_size": 65536 00:18:42.591 }, 00:18:42.591 { 00:18:42.591 "name": "BaseBdev3", 00:18:42.591 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:42.591 "is_configured": true, 00:18:42.591 "data_offset": 0, 00:18:42.591 "data_size": 65536 00:18:42.591 }, 00:18:42.591 { 00:18:42.591 "name": "BaseBdev4", 00:18:42.591 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:42.591 "is_configured": true, 00:18:42.591 "data_offset": 0, 00:18:42.591 "data_size": 65536 00:18:42.591 } 00:18:42.591 ] 00:18:42.591 }' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.591 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.591 "name": "raid_bdev1", 00:18:42.591 "uuid": "dabaaca4-1992-4d98-be43-0f2b94e65b57", 00:18:42.591 "strip_size_kb": 64, 00:18:42.591 "state": "online", 00:18:42.591 "raid_level": "raid5f", 00:18:42.591 "superblock": false, 00:18:42.591 "num_base_bdevs": 4, 00:18:42.591 "num_base_bdevs_discovered": 4, 00:18:42.591 "num_base_bdevs_operational": 4, 00:18:42.591 "base_bdevs_list": [ 00:18:42.591 { 00:18:42.592 "name": "spare", 00:18:42.592 "uuid": "dcbbfe43-736f-589e-ba48-3ffe4b1ade6f", 00:18:42.592 "is_configured": true, 00:18:42.592 "data_offset": 0, 00:18:42.592 "data_size": 65536 00:18:42.592 }, 00:18:42.592 { 00:18:42.592 "name": "BaseBdev2", 00:18:42.592 "uuid": "35b4f516-7084-5423-b362-1623bba65736", 00:18:42.592 "is_configured": true, 00:18:42.592 "data_offset": 0, 00:18:42.592 
"data_size": 65536 00:18:42.592 }, 00:18:42.592 { 00:18:42.592 "name": "BaseBdev3", 00:18:42.592 "uuid": "48b85344-4477-5184-bbd8-2a9358f28489", 00:18:42.592 "is_configured": true, 00:18:42.592 "data_offset": 0, 00:18:42.592 "data_size": 65536 00:18:42.592 }, 00:18:42.592 { 00:18:42.592 "name": "BaseBdev4", 00:18:42.592 "uuid": "fa37a586-54d3-5252-873b-99f5f6822163", 00:18:42.592 "is_configured": true, 00:18:42.592 "data_offset": 0, 00:18:42.592 "data_size": 65536 00:18:42.592 } 00:18:42.592 ] 00:18:42.592 }' 00:18:42.592 20:11:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.592 20:11:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 [2024-12-05 20:11:44.363007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.160 [2024-12-05 20:11:44.363124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.160 [2024-12-05 20:11:44.363235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.160 [2024-12-05 20:11:44.363347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.160 [2024-12-05 20:11:44.363424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:43.160 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.161 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:43.420 /dev/nbd0 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:43.420 1+0 records in 00:18:43.420 1+0 records out 00:18:43.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373242 s, 11.0 MB/s 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.420 20:11:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:43.679 /dev/nbd1 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:43.679 1+0 records in 00:18:43.679 1+0 records out 00:18:43.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409796 s, 10.0 MB/s 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 
']' 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.679 20:11:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.679 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@45 -- # return 0 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.938 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84669 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84669 ']' 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84669 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84669 00:18:44.197 killing process with pid 84669 00:18:44.197 Received shutdown signal, test time was about 60.000000 seconds 00:18:44.197 00:18:44.197 
Latency(us) 00:18:44.197 [2024-12-05T20:11:45.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.197 [2024-12-05T20:11:45.634Z] =================================================================================================================== 00:18:44.197 [2024-12-05T20:11:45.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84669' 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84669 00:18:44.197 [2024-12-05 20:11:45.607455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.197 20:11:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84669 00:18:44.764 [2024-12-05 20:11:46.080904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.701 20:11:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:45.701 00:18:45.701 real 0m19.868s 00:18:45.701 user 0m23.743s 00:18:45.701 sys 0m2.195s 00:18:45.701 20:11:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.701 ************************************ 00:18:45.701 END TEST raid5f_rebuild_test 00:18:45.701 ************************************ 00:18:45.701 20:11:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.960 20:11:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:45.960 20:11:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:45.960 20:11:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:45.960 20:11:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.960 ************************************ 00:18:45.960 START TEST raid5f_rebuild_test_sb 00:18:45.960 ************************************ 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:45.960 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85191 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85191 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85191 ']' 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.961 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.961 [2024-12-05 20:11:47.294590] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:18:45.961 [2024-12-05 20:11:47.294820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:45.961 Zero copy mechanism will not be used. 
00:18:45.961 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85191 ] 00:18:46.219 [2024-12-05 20:11:47.468769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.219 [2024-12-05 20:11:47.570501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.478 [2024-12-05 20:11:47.743526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.478 [2024-12-05 20:11:47.743580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.737 BaseBdev1_malloc 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.737 [2024-12-05 20:11:48.160566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:46.737 [2024-12-05 20:11:48.160627] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:46.737 [2024-12-05 20:11:48.160650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:46.737 [2024-12-05 20:11:48.160661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.737 [2024-12-05 20:11:48.162687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.737 [2024-12-05 20:11:48.162769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:46.737 BaseBdev1 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:46.737 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:46.738 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.738 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.996 BaseBdev2_malloc 00:18:46.996 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.996 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:46.996 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.996 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.996 [2024-12-05 20:11:48.214983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:46.996 [2024-12-05 20:11:48.215039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.996 [2024-12-05 20:11:48.215060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:46.996 
[2024-12-05 20:11:48.215071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.996 [2024-12-05 20:11:48.217036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.996 [2024-12-05 20:11:48.217074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:46.996 BaseBdev2 00:18:46.996 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 BaseBdev3_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 [2024-12-05 20:11:48.281969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:46.997 [2024-12-05 20:11:48.282070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.997 [2024-12-05 20:11:48.282109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:46.997 [2024-12-05 20:11:48.282151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.997 [2024-12-05 20:11:48.284136] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.997 [2024-12-05 20:11:48.284218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:46.997 BaseBdev3 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 BaseBdev4_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 [2024-12-05 20:11:48.334187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:46.997 [2024-12-05 20:11:48.334279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.997 [2024-12-05 20:11:48.334303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:46.997 [2024-12-05 20:11:48.334313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.997 [2024-12-05 20:11:48.336338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.997 [2024-12-05 20:11:48.336381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:18:46.997 BaseBdev4 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 spare_malloc 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 spare_delay 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 [2024-12-05 20:11:48.400124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.997 [2024-12-05 20:11:48.400238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.997 [2024-12-05 20:11:48.400258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:46.997 [2024-12-05 20:11:48.400267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.997 [2024-12-05 20:11:48.402275] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.997 [2024-12-05 20:11:48.402313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.997 spare 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 [2024-12-05 20:11:48.412163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.997 [2024-12-05 20:11:48.413880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.997 [2024-12-05 20:11:48.413955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.997 [2024-12-05 20:11:48.414006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.997 [2024-12-05 20:11:48.414186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:46.997 [2024-12-05 20:11:48.414200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:46.997 [2024-12-05 20:11:48.414432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.997 [2024-12-05 20:11:48.421394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:46.997 [2024-12-05 20:11:48.421416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:46.997 [2024-12-05 20:11:48.421591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.997 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.255 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.255 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.255 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.256 "name": "raid_bdev1", 00:18:47.256 "uuid": 
"b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:47.256 "strip_size_kb": 64, 00:18:47.256 "state": "online", 00:18:47.256 "raid_level": "raid5f", 00:18:47.256 "superblock": true, 00:18:47.256 "num_base_bdevs": 4, 00:18:47.256 "num_base_bdevs_discovered": 4, 00:18:47.256 "num_base_bdevs_operational": 4, 00:18:47.256 "base_bdevs_list": [ 00:18:47.256 { 00:18:47.256 "name": "BaseBdev1", 00:18:47.256 "uuid": "bf89f8c3-ee79-50c7-add3-e08ec72ba8c1", 00:18:47.256 "is_configured": true, 00:18:47.256 "data_offset": 2048, 00:18:47.256 "data_size": 63488 00:18:47.256 }, 00:18:47.256 { 00:18:47.256 "name": "BaseBdev2", 00:18:47.256 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:47.256 "is_configured": true, 00:18:47.256 "data_offset": 2048, 00:18:47.256 "data_size": 63488 00:18:47.256 }, 00:18:47.256 { 00:18:47.256 "name": "BaseBdev3", 00:18:47.256 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:47.256 "is_configured": true, 00:18:47.256 "data_offset": 2048, 00:18:47.256 "data_size": 63488 00:18:47.256 }, 00:18:47.256 { 00:18:47.256 "name": "BaseBdev4", 00:18:47.256 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:47.256 "is_configured": true, 00:18:47.256 "data_offset": 2048, 00:18:47.256 "data_size": 63488 00:18:47.256 } 00:18:47.256 ] 00:18:47.256 }' 00:18:47.256 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.256 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:47.515 [2024-12-05 20:11:48.901150] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.515 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:47.774 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:47.774 [2024-12-05 20:11:49.152571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:47.774 /dev/nbd0 00:18:47.774 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.034 1+0 records in 00:18:48.034 1+0 records out 00:18:48.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472152 s, 8.7 MB/s 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:48.034 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:48.293 496+0 records in 00:18:48.293 496+0 records out 00:18:48.293 97517568 bytes (98 MB, 93 MiB) copied, 0.44468 s, 219 MB/s 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:48.293 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.552 [2024-12-05 20:11:49.893863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.552 [2024-12-05 20:11:49.911608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.552 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.552 "name": "raid_bdev1", 00:18:48.552 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:48.552 "strip_size_kb": 64, 00:18:48.552 "state": "online", 00:18:48.552 "raid_level": "raid5f", 00:18:48.552 "superblock": true, 00:18:48.552 "num_base_bdevs": 4, 00:18:48.552 "num_base_bdevs_discovered": 3, 00:18:48.552 "num_base_bdevs_operational": 3, 00:18:48.552 "base_bdevs_list": [ 00:18:48.552 { 00:18:48.552 "name": null, 00:18:48.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.552 "is_configured": 
false, 00:18:48.552 "data_offset": 0, 00:18:48.552 "data_size": 63488 00:18:48.552 }, 00:18:48.552 { 00:18:48.552 "name": "BaseBdev2", 00:18:48.552 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:48.552 "is_configured": true, 00:18:48.552 "data_offset": 2048, 00:18:48.552 "data_size": 63488 00:18:48.552 }, 00:18:48.552 { 00:18:48.552 "name": "BaseBdev3", 00:18:48.552 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:48.552 "is_configured": true, 00:18:48.552 "data_offset": 2048, 00:18:48.552 "data_size": 63488 00:18:48.552 }, 00:18:48.552 { 00:18:48.552 "name": "BaseBdev4", 00:18:48.552 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:48.552 "is_configured": true, 00:18:48.553 "data_offset": 2048, 00:18:48.553 "data_size": 63488 00:18:48.553 } 00:18:48.553 ] 00:18:48.553 }' 00:18:48.553 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.553 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.122 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.122 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.122 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.122 [2024-12-05 20:11:50.402734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.122 [2024-12-05 20:11:50.417205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:49.122 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.122 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:49.122 [2024-12-05 20:11:50.426501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.061 "name": "raid_bdev1", 00:18:50.061 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:50.061 "strip_size_kb": 64, 00:18:50.061 "state": "online", 00:18:50.061 "raid_level": "raid5f", 00:18:50.061 "superblock": true, 00:18:50.061 "num_base_bdevs": 4, 00:18:50.061 "num_base_bdevs_discovered": 4, 00:18:50.061 "num_base_bdevs_operational": 4, 00:18:50.061 "process": { 00:18:50.061 "type": "rebuild", 00:18:50.061 "target": "spare", 00:18:50.061 "progress": { 00:18:50.061 "blocks": 19200, 00:18:50.061 "percent": 10 00:18:50.061 } 00:18:50.061 }, 00:18:50.061 "base_bdevs_list": [ 00:18:50.061 { 00:18:50.061 "name": "spare", 00:18:50.061 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:50.061 "is_configured": true, 00:18:50.061 "data_offset": 2048, 00:18:50.061 "data_size": 63488 00:18:50.061 }, 
00:18:50.061 { 00:18:50.061 "name": "BaseBdev2", 00:18:50.061 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:50.061 "is_configured": true, 00:18:50.061 "data_offset": 2048, 00:18:50.061 "data_size": 63488 00:18:50.061 }, 00:18:50.061 { 00:18:50.061 "name": "BaseBdev3", 00:18:50.061 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:50.061 "is_configured": true, 00:18:50.061 "data_offset": 2048, 00:18:50.061 "data_size": 63488 00:18:50.061 }, 00:18:50.061 { 00:18:50.061 "name": "BaseBdev4", 00:18:50.061 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:50.061 "is_configured": true, 00:18:50.061 "data_offset": 2048, 00:18:50.061 "data_size": 63488 00:18:50.061 } 00:18:50.061 ] 00:18:50.061 }' 00:18:50.061 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.321 [2024-12-05 20:11:51.581313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.321 [2024-12-05 20:11:51.632335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.321 [2024-12-05 20:11:51.632460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.321 [2024-12-05 20:11:51.632498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.321 
[2024-12-05 20:11:51.632524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.321 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.322 "name": "raid_bdev1", 00:18:50.322 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:50.322 "strip_size_kb": 64, 00:18:50.322 "state": "online", 00:18:50.322 "raid_level": "raid5f", 00:18:50.322 "superblock": true, 00:18:50.322 "num_base_bdevs": 4, 00:18:50.322 "num_base_bdevs_discovered": 3, 00:18:50.322 "num_base_bdevs_operational": 3, 00:18:50.322 "base_bdevs_list": [ 00:18:50.322 { 00:18:50.322 "name": null, 00:18:50.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.322 "is_configured": false, 00:18:50.322 "data_offset": 0, 00:18:50.322 "data_size": 63488 00:18:50.322 }, 00:18:50.322 { 00:18:50.322 "name": "BaseBdev2", 00:18:50.322 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:50.322 "is_configured": true, 00:18:50.322 "data_offset": 2048, 00:18:50.322 "data_size": 63488 00:18:50.322 }, 00:18:50.322 { 00:18:50.322 "name": "BaseBdev3", 00:18:50.322 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:50.322 "is_configured": true, 00:18:50.322 "data_offset": 2048, 00:18:50.322 "data_size": 63488 00:18:50.322 }, 00:18:50.322 { 00:18:50.322 "name": "BaseBdev4", 00:18:50.322 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:50.322 "is_configured": true, 00:18:50.322 "data_offset": 2048, 00:18:50.322 "data_size": 63488 00:18:50.322 } 00:18:50.322 ] 00:18:50.322 }' 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.322 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.892 "name": "raid_bdev1", 00:18:50.892 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:50.892 "strip_size_kb": 64, 00:18:50.892 "state": "online", 00:18:50.892 "raid_level": "raid5f", 00:18:50.892 "superblock": true, 00:18:50.892 "num_base_bdevs": 4, 00:18:50.892 "num_base_bdevs_discovered": 3, 00:18:50.892 "num_base_bdevs_operational": 3, 00:18:50.892 "base_bdevs_list": [ 00:18:50.892 { 00:18:50.892 "name": null, 00:18:50.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.892 "is_configured": false, 00:18:50.892 "data_offset": 0, 00:18:50.892 "data_size": 63488 00:18:50.892 }, 00:18:50.892 { 00:18:50.892 "name": "BaseBdev2", 00:18:50.892 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:50.892 "is_configured": true, 00:18:50.892 "data_offset": 2048, 00:18:50.892 "data_size": 63488 00:18:50.892 }, 00:18:50.892 { 00:18:50.892 "name": "BaseBdev3", 00:18:50.892 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:50.892 "is_configured": true, 00:18:50.892 "data_offset": 2048, 00:18:50.892 "data_size": 63488 00:18:50.892 }, 00:18:50.892 { 00:18:50.892 "name": "BaseBdev4", 00:18:50.892 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 
00:18:50.892 "is_configured": true, 00:18:50.892 "data_offset": 2048, 00:18:50.892 "data_size": 63488 00:18:50.892 } 00:18:50.892 ] 00:18:50.892 }' 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.892 [2024-12-05 20:11:52.225405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.892 [2024-12-05 20:11:52.239504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.892 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.892 [2024-12-05 20:11:52.248283] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.830 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.831 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.090 "name": "raid_bdev1", 00:18:52.090 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:52.090 "strip_size_kb": 64, 00:18:52.090 "state": "online", 00:18:52.090 "raid_level": "raid5f", 00:18:52.090 "superblock": true, 00:18:52.090 "num_base_bdevs": 4, 00:18:52.090 "num_base_bdevs_discovered": 4, 00:18:52.090 "num_base_bdevs_operational": 4, 00:18:52.090 "process": { 00:18:52.090 "type": "rebuild", 00:18:52.090 "target": "spare", 00:18:52.090 "progress": { 00:18:52.090 "blocks": 19200, 00:18:52.090 "percent": 10 00:18:52.090 } 00:18:52.090 }, 00:18:52.090 "base_bdevs_list": [ 00:18:52.090 { 00:18:52.090 "name": "spare", 00:18:52.090 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev2", 00:18:52.090 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev3", 00:18:52.090 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 
00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev4", 00:18:52.090 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 } 00:18:52.090 ] 00:18:52.090 }' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:52.090 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.090 "name": "raid_bdev1", 00:18:52.090 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:52.090 "strip_size_kb": 64, 00:18:52.090 "state": "online", 00:18:52.090 "raid_level": "raid5f", 00:18:52.090 "superblock": true, 00:18:52.090 "num_base_bdevs": 4, 00:18:52.090 "num_base_bdevs_discovered": 4, 00:18:52.090 "num_base_bdevs_operational": 4, 00:18:52.090 "process": { 00:18:52.090 "type": "rebuild", 00:18:52.090 "target": "spare", 00:18:52.090 "progress": { 00:18:52.090 "blocks": 21120, 00:18:52.090 "percent": 11 00:18:52.090 } 00:18:52.090 }, 00:18:52.090 "base_bdevs_list": [ 00:18:52.090 { 00:18:52.090 "name": "spare", 00:18:52.090 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev2", 00:18:52.090 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev3", 00:18:52.090 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 
00:18:52.090 "data_size": 63488 00:18:52.090 }, 00:18:52.090 { 00:18:52.090 "name": "BaseBdev4", 00:18:52.090 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:52.090 "is_configured": true, 00:18:52.090 "data_offset": 2048, 00:18:52.090 "data_size": 63488 00:18:52.090 } 00:18:52.090 ] 00:18:52.090 }' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.090 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.349 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.349 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.286 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.286 "name": "raid_bdev1", 00:18:53.286 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:53.286 "strip_size_kb": 64, 00:18:53.286 "state": "online", 00:18:53.286 "raid_level": "raid5f", 00:18:53.286 "superblock": true, 00:18:53.286 "num_base_bdevs": 4, 00:18:53.286 "num_base_bdevs_discovered": 4, 00:18:53.286 "num_base_bdevs_operational": 4, 00:18:53.286 "process": { 00:18:53.286 "type": "rebuild", 00:18:53.286 "target": "spare", 00:18:53.286 "progress": { 00:18:53.286 "blocks": 44160, 00:18:53.286 "percent": 23 00:18:53.286 } 00:18:53.286 }, 00:18:53.286 "base_bdevs_list": [ 00:18:53.286 { 00:18:53.286 "name": "spare", 00:18:53.286 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:53.286 "is_configured": true, 00:18:53.286 "data_offset": 2048, 00:18:53.286 "data_size": 63488 00:18:53.286 }, 00:18:53.286 { 00:18:53.286 "name": "BaseBdev2", 00:18:53.286 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:53.287 "is_configured": true, 00:18:53.287 "data_offset": 2048, 00:18:53.287 "data_size": 63488 00:18:53.287 }, 00:18:53.287 { 00:18:53.287 "name": "BaseBdev3", 00:18:53.287 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:53.287 "is_configured": true, 00:18:53.287 "data_offset": 2048, 00:18:53.287 "data_size": 63488 00:18:53.287 }, 00:18:53.287 { 00:18:53.287 "name": "BaseBdev4", 00:18:53.287 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:53.287 "is_configured": true, 00:18:53.287 "data_offset": 2048, 00:18:53.287 "data_size": 63488 00:18:53.287 } 00:18:53.287 ] 00:18:53.287 }' 00:18:53.287 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.287 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.287 20:11:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.287 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.287 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.665 "name": "raid_bdev1", 00:18:54.665 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:54.665 "strip_size_kb": 64, 00:18:54.665 "state": "online", 00:18:54.665 "raid_level": "raid5f", 00:18:54.665 "superblock": true, 00:18:54.665 "num_base_bdevs": 4, 00:18:54.665 "num_base_bdevs_discovered": 4, 00:18:54.665 "num_base_bdevs_operational": 
4, 00:18:54.665 "process": { 00:18:54.665 "type": "rebuild", 00:18:54.665 "target": "spare", 00:18:54.665 "progress": { 00:18:54.665 "blocks": 65280, 00:18:54.665 "percent": 34 00:18:54.665 } 00:18:54.665 }, 00:18:54.665 "base_bdevs_list": [ 00:18:54.665 { 00:18:54.665 "name": "spare", 00:18:54.665 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:54.665 "is_configured": true, 00:18:54.665 "data_offset": 2048, 00:18:54.665 "data_size": 63488 00:18:54.665 }, 00:18:54.665 { 00:18:54.665 "name": "BaseBdev2", 00:18:54.665 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:54.665 "is_configured": true, 00:18:54.665 "data_offset": 2048, 00:18:54.665 "data_size": 63488 00:18:54.665 }, 00:18:54.665 { 00:18:54.665 "name": "BaseBdev3", 00:18:54.665 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:54.665 "is_configured": true, 00:18:54.665 "data_offset": 2048, 00:18:54.665 "data_size": 63488 00:18:54.665 }, 00:18:54.665 { 00:18:54.665 "name": "BaseBdev4", 00:18:54.665 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:54.665 "is_configured": true, 00:18:54.665 "data_offset": 2048, 00:18:54.665 "data_size": 63488 00:18:54.665 } 00:18:54.665 ] 00:18:54.665 }' 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.665 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.603 
20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.603 "name": "raid_bdev1", 00:18:55.603 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:55.603 "strip_size_kb": 64, 00:18:55.603 "state": "online", 00:18:55.603 "raid_level": "raid5f", 00:18:55.603 "superblock": true, 00:18:55.603 "num_base_bdevs": 4, 00:18:55.603 "num_base_bdevs_discovered": 4, 00:18:55.603 "num_base_bdevs_operational": 4, 00:18:55.603 "process": { 00:18:55.603 "type": "rebuild", 00:18:55.603 "target": "spare", 00:18:55.603 "progress": { 00:18:55.603 "blocks": 86400, 00:18:55.603 "percent": 45 00:18:55.603 } 00:18:55.603 }, 00:18:55.603 "base_bdevs_list": [ 00:18:55.603 { 00:18:55.603 "name": "spare", 00:18:55.603 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:55.603 "is_configured": true, 00:18:55.603 "data_offset": 2048, 00:18:55.603 "data_size": 63488 00:18:55.603 }, 00:18:55.603 { 00:18:55.603 "name": "BaseBdev2", 00:18:55.603 "uuid": 
"8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:55.603 "is_configured": true, 00:18:55.603 "data_offset": 2048, 00:18:55.603 "data_size": 63488 00:18:55.603 }, 00:18:55.603 { 00:18:55.603 "name": "BaseBdev3", 00:18:55.603 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:55.603 "is_configured": true, 00:18:55.603 "data_offset": 2048, 00:18:55.603 "data_size": 63488 00:18:55.603 }, 00:18:55.603 { 00:18:55.603 "name": "BaseBdev4", 00:18:55.603 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:55.603 "is_configured": true, 00:18:55.603 "data_offset": 2048, 00:18:55.603 "data_size": 63488 00:18:55.603 } 00:18:55.603 ] 00:18:55.603 }' 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.603 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.569 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.827 "name": "raid_bdev1", 00:18:56.827 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:56.827 "strip_size_kb": 64, 00:18:56.827 "state": "online", 00:18:56.827 "raid_level": "raid5f", 00:18:56.827 "superblock": true, 00:18:56.827 "num_base_bdevs": 4, 00:18:56.827 "num_base_bdevs_discovered": 4, 00:18:56.827 "num_base_bdevs_operational": 4, 00:18:56.827 "process": { 00:18:56.827 "type": "rebuild", 00:18:56.827 "target": "spare", 00:18:56.827 "progress": { 00:18:56.827 "blocks": 109440, 00:18:56.827 "percent": 57 00:18:56.827 } 00:18:56.827 }, 00:18:56.827 "base_bdevs_list": [ 00:18:56.827 { 00:18:56.827 "name": "spare", 00:18:56.827 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:56.827 "is_configured": true, 00:18:56.827 "data_offset": 2048, 00:18:56.827 "data_size": 63488 00:18:56.827 }, 00:18:56.827 { 00:18:56.827 "name": "BaseBdev2", 00:18:56.827 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:56.827 "is_configured": true, 00:18:56.827 "data_offset": 2048, 00:18:56.827 "data_size": 63488 00:18:56.827 }, 00:18:56.827 { 00:18:56.827 "name": "BaseBdev3", 00:18:56.827 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:56.827 "is_configured": true, 00:18:56.827 "data_offset": 2048, 00:18:56.827 "data_size": 63488 00:18:56.827 }, 00:18:56.827 { 00:18:56.827 "name": "BaseBdev4", 00:18:56.827 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:56.827 "is_configured": true, 00:18:56.827 "data_offset": 
2048, 00:18:56.827 "data_size": 63488 00:18:56.827 } 00:18:56.827 ] 00:18:56.827 }' 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.827 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.766 
"name": "raid_bdev1", 00:18:57.766 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:57.766 "strip_size_kb": 64, 00:18:57.766 "state": "online", 00:18:57.766 "raid_level": "raid5f", 00:18:57.766 "superblock": true, 00:18:57.766 "num_base_bdevs": 4, 00:18:57.766 "num_base_bdevs_discovered": 4, 00:18:57.766 "num_base_bdevs_operational": 4, 00:18:57.766 "process": { 00:18:57.766 "type": "rebuild", 00:18:57.766 "target": "spare", 00:18:57.766 "progress": { 00:18:57.766 "blocks": 130560, 00:18:57.766 "percent": 68 00:18:57.766 } 00:18:57.766 }, 00:18:57.766 "base_bdevs_list": [ 00:18:57.766 { 00:18:57.766 "name": "spare", 00:18:57.766 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:57.766 "is_configured": true, 00:18:57.766 "data_offset": 2048, 00:18:57.766 "data_size": 63488 00:18:57.766 }, 00:18:57.766 { 00:18:57.766 "name": "BaseBdev2", 00:18:57.766 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:57.766 "is_configured": true, 00:18:57.766 "data_offset": 2048, 00:18:57.766 "data_size": 63488 00:18:57.766 }, 00:18:57.766 { 00:18:57.766 "name": "BaseBdev3", 00:18:57.766 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:57.766 "is_configured": true, 00:18:57.766 "data_offset": 2048, 00:18:57.766 "data_size": 63488 00:18:57.766 }, 00:18:57.766 { 00:18:57.766 "name": "BaseBdev4", 00:18:57.766 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:57.766 "is_configured": true, 00:18:57.766 "data_offset": 2048, 00:18:57.766 "data_size": 63488 00:18:57.766 } 00:18:57.766 ] 00:18:57.766 }' 00:18:57.766 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.025 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.026 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.026 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.026 
20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.964 "name": "raid_bdev1", 00:18:58.964 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:18:58.964 "strip_size_kb": 64, 00:18:58.964 "state": "online", 00:18:58.964 "raid_level": "raid5f", 00:18:58.964 "superblock": true, 00:18:58.964 "num_base_bdevs": 4, 00:18:58.964 "num_base_bdevs_discovered": 4, 00:18:58.964 "num_base_bdevs_operational": 4, 00:18:58.964 "process": { 00:18:58.964 "type": "rebuild", 00:18:58.964 "target": "spare", 00:18:58.964 "progress": { 00:18:58.964 "blocks": 153600, 00:18:58.964 "percent": 80 00:18:58.964 } 00:18:58.964 }, 
00:18:58.964 "base_bdevs_list": [ 00:18:58.964 { 00:18:58.964 "name": "spare", 00:18:58.964 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:18:58.964 "is_configured": true, 00:18:58.964 "data_offset": 2048, 00:18:58.964 "data_size": 63488 00:18:58.964 }, 00:18:58.964 { 00:18:58.964 "name": "BaseBdev2", 00:18:58.964 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:18:58.964 "is_configured": true, 00:18:58.964 "data_offset": 2048, 00:18:58.964 "data_size": 63488 00:18:58.964 }, 00:18:58.964 { 00:18:58.964 "name": "BaseBdev3", 00:18:58.964 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:18:58.964 "is_configured": true, 00:18:58.964 "data_offset": 2048, 00:18:58.964 "data_size": 63488 00:18:58.964 }, 00:18:58.964 { 00:18:58.964 "name": "BaseBdev4", 00:18:58.964 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:18:58.964 "is_configured": true, 00:18:58.964 "data_offset": 2048, 00:18:58.964 "data_size": 63488 00:18:58.964 } 00:18:58.964 ] 00:18:58.964 }' 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.964 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.224 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.224 20:12:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.164 "name": "raid_bdev1", 00:19:00.164 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:00.164 "strip_size_kb": 64, 00:19:00.164 "state": "online", 00:19:00.164 "raid_level": "raid5f", 00:19:00.164 "superblock": true, 00:19:00.164 "num_base_bdevs": 4, 00:19:00.164 "num_base_bdevs_discovered": 4, 00:19:00.164 "num_base_bdevs_operational": 4, 00:19:00.164 "process": { 00:19:00.164 "type": "rebuild", 00:19:00.164 "target": "spare", 00:19:00.164 "progress": { 00:19:00.164 "blocks": 174720, 00:19:00.164 "percent": 91 00:19:00.164 } 00:19:00.164 }, 00:19:00.164 "base_bdevs_list": [ 00:19:00.164 { 00:19:00.164 "name": "spare", 00:19:00.164 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:00.164 "is_configured": true, 00:19:00.164 "data_offset": 2048, 00:19:00.164 "data_size": 63488 00:19:00.164 }, 00:19:00.164 { 00:19:00.164 "name": "BaseBdev2", 00:19:00.164 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:00.164 "is_configured": true, 00:19:00.164 "data_offset": 2048, 00:19:00.164 "data_size": 63488 00:19:00.164 }, 00:19:00.164 { 00:19:00.164 "name": "BaseBdev3", 
00:19:00.164 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:00.164 "is_configured": true, 00:19:00.164 "data_offset": 2048, 00:19:00.164 "data_size": 63488 00:19:00.164 }, 00:19:00.164 { 00:19:00.164 "name": "BaseBdev4", 00:19:00.164 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:00.164 "is_configured": true, 00:19:00.164 "data_offset": 2048, 00:19:00.164 "data_size": 63488 00:19:00.164 } 00:19:00.164 ] 00:19:00.164 }' 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.164 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.102 [2024-12-05 20:12:02.290758] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:01.102 [2024-12-05 20:12:02.290871] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:01.102 [2024-12-05 20:12:02.291037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.361 20:12:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.361 "name": "raid_bdev1", 00:19:01.361 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:01.361 "strip_size_kb": 64, 00:19:01.361 "state": "online", 00:19:01.361 "raid_level": "raid5f", 00:19:01.361 "superblock": true, 00:19:01.361 "num_base_bdevs": 4, 00:19:01.361 "num_base_bdevs_discovered": 4, 00:19:01.361 "num_base_bdevs_operational": 4, 00:19:01.361 "base_bdevs_list": [ 00:19:01.361 { 00:19:01.361 "name": "spare", 00:19:01.361 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev2", 00:19:01.361 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev3", 00:19:01.361 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev4", 00:19:01.361 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 
00:19:01.361 "data_size": 63488 00:19:01.361 } 00:19:01.361 ] 00:19:01.361 }' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.361 "name": "raid_bdev1", 00:19:01.361 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:01.361 "strip_size_kb": 64, 00:19:01.361 
"state": "online", 00:19:01.361 "raid_level": "raid5f", 00:19:01.361 "superblock": true, 00:19:01.361 "num_base_bdevs": 4, 00:19:01.361 "num_base_bdevs_discovered": 4, 00:19:01.361 "num_base_bdevs_operational": 4, 00:19:01.361 "base_bdevs_list": [ 00:19:01.361 { 00:19:01.361 "name": "spare", 00:19:01.361 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev2", 00:19:01.361 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev3", 00:19:01.361 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 }, 00:19:01.361 { 00:19:01.361 "name": "BaseBdev4", 00:19:01.361 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:01.361 "is_configured": true, 00:19:01.361 "data_offset": 2048, 00:19:01.361 "data_size": 63488 00:19:01.361 } 00:19:01.361 ] 00:19:01.361 }' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.361 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.621 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.622 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.622 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.622 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.622 "name": "raid_bdev1", 00:19:01.622 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:01.622 "strip_size_kb": 64, 00:19:01.622 "state": "online", 00:19:01.622 "raid_level": "raid5f", 00:19:01.622 "superblock": true, 00:19:01.622 "num_base_bdevs": 4, 00:19:01.622 "num_base_bdevs_discovered": 4, 00:19:01.622 "num_base_bdevs_operational": 4, 00:19:01.622 "base_bdevs_list": [ 00:19:01.622 { 00:19:01.622 "name": "spare", 00:19:01.622 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:01.622 "is_configured": true, 00:19:01.622 
"data_offset": 2048, 00:19:01.622 "data_size": 63488 00:19:01.622 }, 00:19:01.622 { 00:19:01.622 "name": "BaseBdev2", 00:19:01.622 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:01.622 "is_configured": true, 00:19:01.622 "data_offset": 2048, 00:19:01.622 "data_size": 63488 00:19:01.622 }, 00:19:01.622 { 00:19:01.622 "name": "BaseBdev3", 00:19:01.622 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:01.622 "is_configured": true, 00:19:01.622 "data_offset": 2048, 00:19:01.622 "data_size": 63488 00:19:01.622 }, 00:19:01.622 { 00:19:01.622 "name": "BaseBdev4", 00:19:01.622 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:01.622 "is_configured": true, 00:19:01.622 "data_offset": 2048, 00:19:01.622 "data_size": 63488 00:19:01.622 } 00:19:01.622 ] 00:19:01.622 }' 00:19:01.622 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.622 20:12:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.881 [2024-12-05 20:12:03.278586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.881 [2024-12-05 20:12:03.278659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.881 [2024-12-05 20:12:03.278734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.881 [2024-12-05 20:12:03.278837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.881 [2024-12-05 20:12:03.278858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:01.881 
20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.881 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.882 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.882 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.141 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:02.142 /dev/nbd0 00:19:02.142 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.402 1+0 records in 00:19:02.402 1+0 records out 00:19:02.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375442 s, 10.9 MB/s 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.402 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:02.402 /dev/nbd1 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.662 1+0 records in 00:19:02.662 1+0 records out 00:19:02.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644378 s, 6.4 MB/s 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.662 20:12:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.662 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.921 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.180 
20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.180 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.180 [2024-12-05 20:12:04.495033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.180 [2024-12-05 20:12:04.495087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.180 [2024-12-05 20:12:04.495113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:03.180 [2024-12-05 20:12:04.495121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.180 [2024-12-05 20:12:04.497395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.180 [2024-12-05 20:12:04.497436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.180 [2024-12-05 20:12:04.497512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:03.180 [2024-12-05 20:12:04.497562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.180 [2024-12-05 20:12:04.497696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.181 [2024-12-05 20:12:04.497784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.181 [2024-12-05 20:12:04.497858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:03.181 spare 00:19:03.181 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:03.181 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:03.181 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.181 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 [2024-12-05 20:12:04.597792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:03.181 [2024-12-05 20:12:04.597822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:03.181 [2024-12-05 20:12:04.598079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:03.181 [2024-12-05 20:12:04.604771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:03.181 [2024-12-05 20:12:04.604844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:03.181 [2024-12-05 20:12:04.605062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.440 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.441 "name": "raid_bdev1", 00:19:03.441 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:03.441 "strip_size_kb": 64, 00:19:03.441 "state": "online", 00:19:03.441 "raid_level": "raid5f", 00:19:03.441 "superblock": true, 00:19:03.441 "num_base_bdevs": 4, 00:19:03.441 "num_base_bdevs_discovered": 4, 00:19:03.441 "num_base_bdevs_operational": 4, 00:19:03.441 "base_bdevs_list": [ 00:19:03.441 { 00:19:03.441 "name": "spare", 00:19:03.441 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:03.441 "is_configured": true, 00:19:03.441 "data_offset": 2048, 00:19:03.441 "data_size": 63488 00:19:03.441 }, 00:19:03.441 { 00:19:03.441 "name": "BaseBdev2", 00:19:03.441 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:03.441 "is_configured": true, 00:19:03.441 "data_offset": 2048, 00:19:03.441 "data_size": 63488 00:19:03.441 }, 00:19:03.441 { 00:19:03.441 "name": "BaseBdev3", 00:19:03.441 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:03.441 
"is_configured": true, 00:19:03.441 "data_offset": 2048, 00:19:03.441 "data_size": 63488 00:19:03.441 }, 00:19:03.441 { 00:19:03.441 "name": "BaseBdev4", 00:19:03.441 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:03.441 "is_configured": true, 00:19:03.441 "data_offset": 2048, 00:19:03.441 "data_size": 63488 00:19:03.441 } 00:19:03.441 ] 00:19:03.441 }' 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.441 20:12:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.700 "name": "raid_bdev1", 00:19:03.700 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:03.700 "strip_size_kb": 64, 00:19:03.700 "state": "online", 00:19:03.700 "raid_level": "raid5f", 
00:19:03.700 "superblock": true, 00:19:03.700 "num_base_bdevs": 4, 00:19:03.700 "num_base_bdevs_discovered": 4, 00:19:03.700 "num_base_bdevs_operational": 4, 00:19:03.700 "base_bdevs_list": [ 00:19:03.700 { 00:19:03.700 "name": "spare", 00:19:03.700 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:03.700 "is_configured": true, 00:19:03.700 "data_offset": 2048, 00:19:03.700 "data_size": 63488 00:19:03.700 }, 00:19:03.700 { 00:19:03.700 "name": "BaseBdev2", 00:19:03.700 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:03.700 "is_configured": true, 00:19:03.700 "data_offset": 2048, 00:19:03.700 "data_size": 63488 00:19:03.700 }, 00:19:03.700 { 00:19:03.700 "name": "BaseBdev3", 00:19:03.700 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:03.700 "is_configured": true, 00:19:03.700 "data_offset": 2048, 00:19:03.700 "data_size": 63488 00:19:03.700 }, 00:19:03.700 { 00:19:03.700 "name": "BaseBdev4", 00:19:03.700 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:03.700 "is_configured": true, 00:19:03.700 "data_offset": 2048, 00:19:03.700 "data_size": 63488 00:19:03.700 } 00:19:03.700 ] 00:19:03.700 }' 00:19:03.700 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.959 [2024-12-05 20:12:05.256298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.959 "name": "raid_bdev1", 00:19:03.959 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:03.959 "strip_size_kb": 64, 00:19:03.959 "state": "online", 00:19:03.959 "raid_level": "raid5f", 00:19:03.959 "superblock": true, 00:19:03.959 "num_base_bdevs": 4, 00:19:03.959 "num_base_bdevs_discovered": 3, 00:19:03.959 "num_base_bdevs_operational": 3, 00:19:03.959 "base_bdevs_list": [ 00:19:03.959 { 00:19:03.959 "name": null, 00:19:03.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.959 "is_configured": false, 00:19:03.959 "data_offset": 0, 00:19:03.959 "data_size": 63488 00:19:03.959 }, 00:19:03.959 { 00:19:03.959 "name": "BaseBdev2", 00:19:03.959 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:03.959 "is_configured": true, 00:19:03.959 "data_offset": 2048, 00:19:03.959 "data_size": 63488 00:19:03.959 }, 00:19:03.959 { 00:19:03.959 "name": "BaseBdev3", 00:19:03.959 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:03.959 "is_configured": true, 00:19:03.959 "data_offset": 2048, 00:19:03.959 "data_size": 63488 00:19:03.959 }, 00:19:03.959 { 00:19:03.959 "name": "BaseBdev4", 00:19:03.959 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:03.959 "is_configured": true, 00:19:03.959 "data_offset": 2048, 00:19:03.959 "data_size": 63488 00:19:03.959 } 00:19:03.959 ] 00:19:03.959 }' 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.959 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.527 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.527 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.527 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.527 [2024-12-05 20:12:05.735474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.527 [2024-12-05 20:12:05.735663] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:04.527 [2024-12-05 20:12:05.735729] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:04.527 [2024-12-05 20:12:05.735802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.528 [2024-12-05 20:12:05.749844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:04.528 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.528 20:12:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:04.528 [2024-12-05 20:12:05.758538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.504 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.504 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.504 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.504 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.505 20:12:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.505 "name": "raid_bdev1", 00:19:05.505 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:05.505 "strip_size_kb": 64, 00:19:05.505 "state": "online", 00:19:05.505 "raid_level": "raid5f", 00:19:05.505 "superblock": true, 00:19:05.505 "num_base_bdevs": 4, 00:19:05.505 "num_base_bdevs_discovered": 4, 00:19:05.505 "num_base_bdevs_operational": 4, 00:19:05.505 "process": { 00:19:05.505 "type": "rebuild", 00:19:05.505 "target": "spare", 00:19:05.505 "progress": { 00:19:05.505 "blocks": 19200, 00:19:05.505 "percent": 10 00:19:05.505 } 00:19:05.505 }, 00:19:05.505 "base_bdevs_list": [ 00:19:05.505 { 00:19:05.505 "name": "spare", 00:19:05.505 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:05.505 "is_configured": true, 00:19:05.505 "data_offset": 2048, 00:19:05.505 "data_size": 63488 00:19:05.505 }, 00:19:05.505 { 00:19:05.505 "name": "BaseBdev2", 00:19:05.505 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:05.505 "is_configured": true, 00:19:05.505 "data_offset": 2048, 00:19:05.505 "data_size": 63488 00:19:05.505 }, 00:19:05.505 { 00:19:05.505 "name": "BaseBdev3", 00:19:05.505 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:05.505 "is_configured": true, 00:19:05.505 "data_offset": 2048, 00:19:05.505 "data_size": 
63488 00:19:05.505 }, 00:19:05.505 { 00:19:05.505 "name": "BaseBdev4", 00:19:05.505 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:05.505 "is_configured": true, 00:19:05.505 "data_offset": 2048, 00:19:05.505 "data_size": 63488 00:19:05.505 } 00:19:05.505 ] 00:19:05.505 }' 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.505 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.505 [2024-12-05 20:12:06.913424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.765 [2024-12-05 20:12:06.964355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.765 [2024-12-05 20:12:06.964415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.765 [2024-12-05 20:12:06.964431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.765 [2024-12-05 20:12:06.964440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.765 20:12:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.765 "name": "raid_bdev1", 00:19:05.765 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:05.765 "strip_size_kb": 64, 00:19:05.765 "state": "online", 00:19:05.765 "raid_level": "raid5f", 00:19:05.765 "superblock": true, 00:19:05.765 "num_base_bdevs": 4, 00:19:05.765 "num_base_bdevs_discovered": 3, 00:19:05.765 "num_base_bdevs_operational": 3, 00:19:05.765 "base_bdevs_list": [ 00:19:05.765 
{ 00:19:05.765 "name": null, 00:19:05.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.765 "is_configured": false, 00:19:05.765 "data_offset": 0, 00:19:05.765 "data_size": 63488 00:19:05.765 }, 00:19:05.765 { 00:19:05.765 "name": "BaseBdev2", 00:19:05.765 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:05.765 "is_configured": true, 00:19:05.765 "data_offset": 2048, 00:19:05.765 "data_size": 63488 00:19:05.765 }, 00:19:05.765 { 00:19:05.765 "name": "BaseBdev3", 00:19:05.765 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:05.765 "is_configured": true, 00:19:05.765 "data_offset": 2048, 00:19:05.765 "data_size": 63488 00:19:05.765 }, 00:19:05.765 { 00:19:05.765 "name": "BaseBdev4", 00:19:05.765 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:05.765 "is_configured": true, 00:19:05.765 "data_offset": 2048, 00:19:05.765 "data_size": 63488 00:19:05.765 } 00:19:05.765 ] 00:19:05.765 }' 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.765 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.023 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:06.023 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.023 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.282 [2024-12-05 20:12:07.460500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:06.282 [2024-12-05 20:12:07.460606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.282 [2024-12-05 20:12:07.460651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:06.282 [2024-12-05 20:12:07.460706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.282 [2024-12-05 20:12:07.461282] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.282 [2024-12-05 20:12:07.461355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:06.282 [2024-12-05 20:12:07.461478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:06.282 [2024-12-05 20:12:07.461530] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.283 [2024-12-05 20:12:07.461587] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:06.283 [2024-12-05 20:12:07.461646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.283 [2024-12-05 20:12:07.476167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:06.283 spare 00:19:06.283 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.283 20:12:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:06.283 [2024-12-05 20:12:07.484831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.236 "name": "raid_bdev1", 00:19:07.236 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:07.236 "strip_size_kb": 64, 00:19:07.236 "state": "online", 00:19:07.236 "raid_level": "raid5f", 00:19:07.236 "superblock": true, 00:19:07.236 "num_base_bdevs": 4, 00:19:07.236 "num_base_bdevs_discovered": 4, 00:19:07.236 "num_base_bdevs_operational": 4, 00:19:07.236 "process": { 00:19:07.236 "type": "rebuild", 00:19:07.236 "target": "spare", 00:19:07.236 "progress": { 00:19:07.236 "blocks": 19200, 00:19:07.236 "percent": 10 00:19:07.236 } 00:19:07.236 }, 00:19:07.236 "base_bdevs_list": [ 00:19:07.236 { 00:19:07.236 "name": "spare", 00:19:07.236 "uuid": "b8d52ed5-d0d2-52c2-bf4b-c525c004ef63", 00:19:07.236 "is_configured": true, 00:19:07.236 "data_offset": 2048, 00:19:07.236 "data_size": 63488 00:19:07.236 }, 00:19:07.236 { 00:19:07.236 "name": "BaseBdev2", 00:19:07.236 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:07.236 "is_configured": true, 00:19:07.236 "data_offset": 2048, 00:19:07.236 "data_size": 63488 00:19:07.236 }, 00:19:07.236 { 00:19:07.236 "name": "BaseBdev3", 00:19:07.236 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:07.236 "is_configured": true, 00:19:07.236 "data_offset": 2048, 00:19:07.236 "data_size": 63488 00:19:07.236 }, 00:19:07.236 { 00:19:07.236 "name": "BaseBdev4", 00:19:07.236 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:07.236 "is_configured": true, 00:19:07.236 "data_offset": 2048, 00:19:07.236 "data_size": 63488 00:19:07.236 } 
00:19:07.236 ] 00:19:07.236 }' 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.236 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.236 [2024-12-05 20:12:08.639598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.496 [2024-12-05 20:12:08.690574] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.496 [2024-12-05 20:12:08.690623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.496 [2024-12-05 20:12:08.690641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.496 [2024-12-05 20:12:08.690647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.496 "name": "raid_bdev1", 00:19:07.496 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:07.496 "strip_size_kb": 64, 00:19:07.496 "state": "online", 00:19:07.496 "raid_level": "raid5f", 00:19:07.496 "superblock": true, 00:19:07.496 "num_base_bdevs": 4, 00:19:07.496 "num_base_bdevs_discovered": 3, 00:19:07.496 "num_base_bdevs_operational": 3, 00:19:07.496 "base_bdevs_list": [ 00:19:07.496 { 00:19:07.496 "name": null, 00:19:07.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.496 "is_configured": false, 00:19:07.496 "data_offset": 0, 00:19:07.496 "data_size": 63488 00:19:07.496 }, 00:19:07.496 { 00:19:07.496 
"name": "BaseBdev2", 00:19:07.496 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:07.496 "is_configured": true, 00:19:07.496 "data_offset": 2048, 00:19:07.496 "data_size": 63488 00:19:07.496 }, 00:19:07.496 { 00:19:07.496 "name": "BaseBdev3", 00:19:07.496 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:07.496 "is_configured": true, 00:19:07.496 "data_offset": 2048, 00:19:07.496 "data_size": 63488 00:19:07.496 }, 00:19:07.496 { 00:19:07.496 "name": "BaseBdev4", 00:19:07.496 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:07.496 "is_configured": true, 00:19:07.496 "data_offset": 2048, 00:19:07.496 "data_size": 63488 00:19:07.496 } 00:19:07.496 ] 00:19:07.496 }' 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.496 20:12:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.066 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.066 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.066 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.066 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.067 "name": "raid_bdev1", 00:19:08.067 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:08.067 "strip_size_kb": 64, 00:19:08.067 "state": "online", 00:19:08.067 "raid_level": "raid5f", 00:19:08.067 "superblock": true, 00:19:08.067 "num_base_bdevs": 4, 00:19:08.067 "num_base_bdevs_discovered": 3, 00:19:08.067 "num_base_bdevs_operational": 3, 00:19:08.067 "base_bdevs_list": [ 00:19:08.067 { 00:19:08.067 "name": null, 00:19:08.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.067 "is_configured": false, 00:19:08.067 "data_offset": 0, 00:19:08.067 "data_size": 63488 00:19:08.067 }, 00:19:08.067 { 00:19:08.067 "name": "BaseBdev2", 00:19:08.067 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:08.067 "is_configured": true, 00:19:08.067 "data_offset": 2048, 00:19:08.067 "data_size": 63488 00:19:08.067 }, 00:19:08.067 { 00:19:08.067 "name": "BaseBdev3", 00:19:08.067 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:08.067 "is_configured": true, 00:19:08.067 "data_offset": 2048, 00:19:08.067 "data_size": 63488 00:19:08.067 }, 00:19:08.067 { 00:19:08.067 "name": "BaseBdev4", 00:19:08.067 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:08.067 "is_configured": true, 00:19:08.067 "data_offset": 2048, 00:19:08.067 "data_size": 63488 00:19:08.067 } 00:19:08.067 ] 00:19:08.067 }' 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.067 [2024-12-05 20:12:09.334119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.067 [2024-12-05 20:12:09.334222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.067 [2024-12-05 20:12:09.334248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:08.067 [2024-12-05 20:12:09.334257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.067 [2024-12-05 20:12:09.334683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.067 [2024-12-05 20:12:09.334701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.067 [2024-12-05 20:12:09.334772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:08.067 [2024-12-05 20:12:09.334785] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.067 [2024-12-05 20:12:09.334796] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.067 [2024-12-05 20:12:09.334805] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:19:08.067 BaseBdev1 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.067 20:12:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.007 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.007 20:12:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.007 "name": "raid_bdev1", 00:19:09.007 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:09.007 "strip_size_kb": 64, 00:19:09.008 "state": "online", 00:19:09.008 "raid_level": "raid5f", 00:19:09.008 "superblock": true, 00:19:09.008 "num_base_bdevs": 4, 00:19:09.008 "num_base_bdevs_discovered": 3, 00:19:09.008 "num_base_bdevs_operational": 3, 00:19:09.008 "base_bdevs_list": [ 00:19:09.008 { 00:19:09.008 "name": null, 00:19:09.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.008 "is_configured": false, 00:19:09.008 "data_offset": 0, 00:19:09.008 "data_size": 63488 00:19:09.008 }, 00:19:09.008 { 00:19:09.008 "name": "BaseBdev2", 00:19:09.008 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:09.008 "is_configured": true, 00:19:09.008 "data_offset": 2048, 00:19:09.008 "data_size": 63488 00:19:09.008 }, 00:19:09.008 { 00:19:09.008 "name": "BaseBdev3", 00:19:09.008 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:09.008 "is_configured": true, 00:19:09.008 "data_offset": 2048, 00:19:09.008 "data_size": 63488 00:19:09.008 }, 00:19:09.008 { 00:19:09.008 "name": "BaseBdev4", 00:19:09.008 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:09.008 "is_configured": true, 00:19:09.008 "data_offset": 2048, 00:19:09.008 "data_size": 63488 00:19:09.008 } 00:19:09.008 ] 00:19:09.008 }' 00:19:09.008 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.008 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.577 20:12:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.577 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.577 "name": "raid_bdev1", 00:19:09.577 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:09.577 "strip_size_kb": 64, 00:19:09.577 "state": "online", 00:19:09.577 "raid_level": "raid5f", 00:19:09.577 "superblock": true, 00:19:09.577 "num_base_bdevs": 4, 00:19:09.577 "num_base_bdevs_discovered": 3, 00:19:09.578 "num_base_bdevs_operational": 3, 00:19:09.578 "base_bdevs_list": [ 00:19:09.578 { 00:19:09.578 "name": null, 00:19:09.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.578 "is_configured": false, 00:19:09.578 "data_offset": 0, 00:19:09.578 "data_size": 63488 00:19:09.578 }, 00:19:09.578 { 00:19:09.578 "name": "BaseBdev2", 00:19:09.578 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:09.578 "is_configured": true, 00:19:09.578 "data_offset": 2048, 00:19:09.578 "data_size": 63488 00:19:09.578 }, 00:19:09.578 { 00:19:09.578 "name": "BaseBdev3", 00:19:09.578 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:09.578 "is_configured": true, 00:19:09.578 "data_offset": 2048, 00:19:09.578 "data_size": 63488 00:19:09.578 }, 00:19:09.578 { 00:19:09.578 "name": "BaseBdev4", 00:19:09.578 "uuid": 
"de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:09.578 "is_configured": true, 00:19:09.578 "data_offset": 2048, 00:19:09.578 "data_size": 63488 00:19:09.578 } 00:19:09.578 ] 00:19:09.578 }' 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 [2024-12-05 20:12:10.879674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.578 
[2024-12-05 20:12:10.879812] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:09.578 [2024-12-05 20:12:10.879831] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:09.578 request: 00:19:09.578 { 00:19:09.578 "base_bdev": "BaseBdev1", 00:19:09.578 "raid_bdev": "raid_bdev1", 00:19:09.578 "method": "bdev_raid_add_base_bdev", 00:19:09.578 "req_id": 1 00:19:09.578 } 00:19:09.578 Got JSON-RPC error response 00:19:09.578 response: 00:19:09.578 { 00:19:09.578 "code": -22, 00:19:09.578 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:09.578 } 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.578 20:12:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.517 "name": "raid_bdev1", 00:19:10.517 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:10.517 "strip_size_kb": 64, 00:19:10.517 "state": "online", 00:19:10.517 "raid_level": "raid5f", 00:19:10.517 "superblock": true, 00:19:10.517 "num_base_bdevs": 4, 00:19:10.517 "num_base_bdevs_discovered": 3, 00:19:10.517 "num_base_bdevs_operational": 3, 00:19:10.517 "base_bdevs_list": [ 00:19:10.517 { 00:19:10.517 "name": null, 00:19:10.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.517 "is_configured": false, 00:19:10.517 "data_offset": 0, 00:19:10.517 "data_size": 63488 00:19:10.517 }, 00:19:10.517 { 00:19:10.517 "name": "BaseBdev2", 00:19:10.517 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:10.517 "is_configured": true, 00:19:10.517 "data_offset": 2048, 00:19:10.517 "data_size": 63488 00:19:10.517 }, 00:19:10.517 { 00:19:10.517 "name": 
"BaseBdev3", 00:19:10.517 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:10.517 "is_configured": true, 00:19:10.517 "data_offset": 2048, 00:19:10.517 "data_size": 63488 00:19:10.517 }, 00:19:10.517 { 00:19:10.517 "name": "BaseBdev4", 00:19:10.517 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:10.517 "is_configured": true, 00:19:10.517 "data_offset": 2048, 00:19:10.517 "data_size": 63488 00:19:10.517 } 00:19:10.517 ] 00:19:10.517 }' 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.517 20:12:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.087 "name": "raid_bdev1", 00:19:11.087 "uuid": "b275d015-e0ea-4981-a727-5beb188efd2b", 00:19:11.087 
"strip_size_kb": 64, 00:19:11.087 "state": "online", 00:19:11.087 "raid_level": "raid5f", 00:19:11.087 "superblock": true, 00:19:11.087 "num_base_bdevs": 4, 00:19:11.087 "num_base_bdevs_discovered": 3, 00:19:11.087 "num_base_bdevs_operational": 3, 00:19:11.087 "base_bdevs_list": [ 00:19:11.087 { 00:19:11.087 "name": null, 00:19:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.087 "is_configured": false, 00:19:11.087 "data_offset": 0, 00:19:11.087 "data_size": 63488 00:19:11.087 }, 00:19:11.087 { 00:19:11.087 "name": "BaseBdev2", 00:19:11.087 "uuid": "8a1acbf5-0b00-5607-984a-b7a14c85bf6b", 00:19:11.087 "is_configured": true, 00:19:11.087 "data_offset": 2048, 00:19:11.087 "data_size": 63488 00:19:11.087 }, 00:19:11.087 { 00:19:11.087 "name": "BaseBdev3", 00:19:11.087 "uuid": "be4fb2b3-265a-5550-9116-72bfcb12427e", 00:19:11.087 "is_configured": true, 00:19:11.087 "data_offset": 2048, 00:19:11.087 "data_size": 63488 00:19:11.087 }, 00:19:11.087 { 00:19:11.087 "name": "BaseBdev4", 00:19:11.087 "uuid": "de0dcf5e-a7bd-5c37-83fe-a8aa1914e8ef", 00:19:11.087 "is_configured": true, 00:19:11.087 "data_offset": 2048, 00:19:11.087 "data_size": 63488 00:19:11.087 } 00:19:11.087 ] 00:19:11.087 }' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85191 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85191 ']' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85191 00:19:11.087 
20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.087 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85191 00:19:11.347 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.347 killing process with pid 85191 00:19:11.347 Received shutdown signal, test time was about 60.000000 seconds 00:19:11.347 00:19:11.347 Latency(us) 00:19:11.347 [2024-12-05T20:12:12.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.347 [2024-12-05T20:12:12.784Z] =================================================================================================================== 00:19:11.347 [2024-12-05T20:12:12.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.347 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.347 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85191' 00:19:11.347 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85191 00:19:11.347 [2024-12-05 20:12:12.529539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.347 [2024-12-05 20:12:12.529652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.347 20:12:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85191 00:19:11.347 [2024-12-05 20:12:12.529730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.347 [2024-12-05 20:12:12.529743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:11.606 [2024-12-05 20:12:12.989787] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.988 20:12:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:12.988 00:19:12.988 real 0m26.840s 00:19:12.988 user 0m33.751s 00:19:12.988 sys 0m3.060s 00:19:12.988 20:12:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.988 20:12:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.988 ************************************ 00:19:12.988 END TEST raid5f_rebuild_test_sb 00:19:12.988 ************************************ 00:19:12.988 20:12:14 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:12.988 20:12:14 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:12.988 20:12:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:12.988 20:12:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.988 20:12:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.988 ************************************ 00:19:12.988 START TEST raid_state_function_test_sb_4k 00:19:12.988 ************************************ 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85999 
00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:12.988 Process raid pid: 85999 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85999' 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85999 00:19:12.988 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85999 ']' 00:19:12.989 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.989 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.989 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.989 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.989 20:12:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.989 [2024-12-05 20:12:14.232124] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:19:12.989 [2024-12-05 20:12:14.232238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.989 [2024-12-05 20:12:14.413313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.249 [2024-12-05 20:12:14.524633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.508 [2024-12-05 20:12:14.716814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.508 [2024-12-05 20:12:14.716852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.767 [2024-12-05 20:12:15.061338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.767 [2024-12-05 20:12:15.061394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.767 [2024-12-05 20:12:15.061405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.767 [2024-12-05 20:12:15.061414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.767 "name": "Existed_Raid", 00:19:13.767 "uuid": 
"2f81b984-bbab-4411-9d25-2d90319b5afc", 00:19:13.767 "strip_size_kb": 0, 00:19:13.767 "state": "configuring", 00:19:13.767 "raid_level": "raid1", 00:19:13.767 "superblock": true, 00:19:13.767 "num_base_bdevs": 2, 00:19:13.767 "num_base_bdevs_discovered": 0, 00:19:13.767 "num_base_bdevs_operational": 2, 00:19:13.767 "base_bdevs_list": [ 00:19:13.767 { 00:19:13.767 "name": "BaseBdev1", 00:19:13.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.767 "is_configured": false, 00:19:13.767 "data_offset": 0, 00:19:13.767 "data_size": 0 00:19:13.767 }, 00:19:13.767 { 00:19:13.767 "name": "BaseBdev2", 00:19:13.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.767 "is_configured": false, 00:19:13.767 "data_offset": 0, 00:19:13.767 "data_size": 0 00:19:13.767 } 00:19:13.767 ] 00:19:13.767 }' 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.767 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 [2024-12-05 20:12:15.524613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.334 [2024-12-05 20:12:15.524688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:14.334 20:12:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 [2024-12-05 20:12:15.536591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.334 [2024-12-05 20:12:15.536661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.334 [2024-12-05 20:12:15.536686] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.334 [2024-12-05 20:12:15.536740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 [2024-12-05 20:12:15.580334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.334 BaseBdev1 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.334 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 [ 00:19:14.334 { 00:19:14.334 "name": "BaseBdev1", 00:19:14.334 "aliases": [ 00:19:14.334 "54b55d09-e928-4728-89ce-e21d4623fdcb" 00:19:14.334 ], 00:19:14.334 "product_name": "Malloc disk", 00:19:14.334 "block_size": 4096, 00:19:14.334 "num_blocks": 8192, 00:19:14.334 "uuid": "54b55d09-e928-4728-89ce-e21d4623fdcb", 00:19:14.334 "assigned_rate_limits": { 00:19:14.334 "rw_ios_per_sec": 0, 00:19:14.334 "rw_mbytes_per_sec": 0, 00:19:14.334 "r_mbytes_per_sec": 0, 00:19:14.334 "w_mbytes_per_sec": 0 00:19:14.334 }, 00:19:14.334 "claimed": true, 00:19:14.334 "claim_type": "exclusive_write", 00:19:14.334 "zoned": false, 00:19:14.334 "supported_io_types": { 00:19:14.334 "read": true, 00:19:14.334 "write": true, 00:19:14.334 "unmap": true, 00:19:14.334 "flush": true, 00:19:14.334 "reset": true, 00:19:14.334 "nvme_admin": false, 00:19:14.334 "nvme_io": false, 00:19:14.334 "nvme_io_md": false, 00:19:14.334 "write_zeroes": true, 00:19:14.334 "zcopy": true, 00:19:14.334 
"get_zone_info": false, 00:19:14.334 "zone_management": false, 00:19:14.334 "zone_append": false, 00:19:14.334 "compare": false, 00:19:14.334 "compare_and_write": false, 00:19:14.334 "abort": true, 00:19:14.334 "seek_hole": false, 00:19:14.334 "seek_data": false, 00:19:14.334 "copy": true, 00:19:14.334 "nvme_iov_md": false 00:19:14.334 }, 00:19:14.334 "memory_domains": [ 00:19:14.334 { 00:19:14.334 "dma_device_id": "system", 00:19:14.334 "dma_device_type": 1 00:19:14.335 }, 00:19:14.335 { 00:19:14.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.335 "dma_device_type": 2 00:19:14.335 } 00:19:14.335 ], 00:19:14.335 "driver_specific": {} 00:19:14.335 } 00:19:14.335 ] 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.335 "name": "Existed_Raid", 00:19:14.335 "uuid": "2a083357-c3d9-4524-80b1-df434afef368", 00:19:14.335 "strip_size_kb": 0, 00:19:14.335 "state": "configuring", 00:19:14.335 "raid_level": "raid1", 00:19:14.335 "superblock": true, 00:19:14.335 "num_base_bdevs": 2, 00:19:14.335 "num_base_bdevs_discovered": 1, 00:19:14.335 "num_base_bdevs_operational": 2, 00:19:14.335 "base_bdevs_list": [ 00:19:14.335 { 00:19:14.335 "name": "BaseBdev1", 00:19:14.335 "uuid": "54b55d09-e928-4728-89ce-e21d4623fdcb", 00:19:14.335 "is_configured": true, 00:19:14.335 "data_offset": 256, 00:19:14.335 "data_size": 7936 00:19:14.335 }, 00:19:14.335 { 00:19:14.335 "name": "BaseBdev2", 00:19:14.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.335 "is_configured": false, 00:19:14.335 "data_offset": 0, 00:19:14.335 "data_size": 0 00:19:14.335 } 00:19:14.335 ] 00:19:14.335 }' 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.335 20:12:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.903 [2024-12-05 20:12:16.087552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.903 [2024-12-05 20:12:16.087591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.903 [2024-12-05 20:12:16.099565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.903 [2024-12-05 20:12:16.101332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.903 [2024-12-05 20:12:16.101374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.903 20:12:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.903 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.904 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.904 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.904 "name": "Existed_Raid", 00:19:14.904 "uuid": "301e0359-e1df-4367-ba95-51f5c5cdd37b", 00:19:14.904 "strip_size_kb": 0, 00:19:14.904 "state": "configuring", 00:19:14.904 "raid_level": "raid1", 00:19:14.904 "superblock": true, 
00:19:14.904 "num_base_bdevs": 2, 00:19:14.904 "num_base_bdevs_discovered": 1, 00:19:14.904 "num_base_bdevs_operational": 2, 00:19:14.904 "base_bdevs_list": [ 00:19:14.904 { 00:19:14.904 "name": "BaseBdev1", 00:19:14.904 "uuid": "54b55d09-e928-4728-89ce-e21d4623fdcb", 00:19:14.904 "is_configured": true, 00:19:14.904 "data_offset": 256, 00:19:14.904 "data_size": 7936 00:19:14.904 }, 00:19:14.904 { 00:19:14.904 "name": "BaseBdev2", 00:19:14.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.904 "is_configured": false, 00:19:14.904 "data_offset": 0, 00:19:14.904 "data_size": 0 00:19:14.904 } 00:19:14.904 ] 00:19:14.904 }' 00:19:14.904 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.904 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.163 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:15.163 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.424 [2024-12-05 20:12:16.638737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.424 [2024-12-05 20:12:16.639081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:15.424 [2024-12-05 20:12:16.639137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:15.424 [2024-12-05 20:12:16.639401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:15.424 [2024-12-05 20:12:16.639617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:15.424 [2024-12-05 20:12:16.639667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:19:15.424 BaseBdev2 00:19:15.424 [2024-12-05 20:12:16.639852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.424 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.424 [ 00:19:15.424 { 00:19:15.424 "name": "BaseBdev2", 00:19:15.424 "aliases": [ 00:19:15.424 "d62dc38e-c0fb-4b3b-94d2-c2c154ade6ce" 00:19:15.424 ], 00:19:15.424 "product_name": "Malloc 
disk", 00:19:15.424 "block_size": 4096, 00:19:15.424 "num_blocks": 8192, 00:19:15.424 "uuid": "d62dc38e-c0fb-4b3b-94d2-c2c154ade6ce", 00:19:15.424 "assigned_rate_limits": { 00:19:15.424 "rw_ios_per_sec": 0, 00:19:15.424 "rw_mbytes_per_sec": 0, 00:19:15.424 "r_mbytes_per_sec": 0, 00:19:15.424 "w_mbytes_per_sec": 0 00:19:15.424 }, 00:19:15.424 "claimed": true, 00:19:15.424 "claim_type": "exclusive_write", 00:19:15.424 "zoned": false, 00:19:15.424 "supported_io_types": { 00:19:15.424 "read": true, 00:19:15.424 "write": true, 00:19:15.424 "unmap": true, 00:19:15.424 "flush": true, 00:19:15.424 "reset": true, 00:19:15.424 "nvme_admin": false, 00:19:15.424 "nvme_io": false, 00:19:15.424 "nvme_io_md": false, 00:19:15.424 "write_zeroes": true, 00:19:15.424 "zcopy": true, 00:19:15.424 "get_zone_info": false, 00:19:15.424 "zone_management": false, 00:19:15.424 "zone_append": false, 00:19:15.424 "compare": false, 00:19:15.424 "compare_and_write": false, 00:19:15.424 "abort": true, 00:19:15.424 "seek_hole": false, 00:19:15.424 "seek_data": false, 00:19:15.424 "copy": true, 00:19:15.424 "nvme_iov_md": false 00:19:15.424 }, 00:19:15.425 "memory_domains": [ 00:19:15.425 { 00:19:15.425 "dma_device_id": "system", 00:19:15.425 "dma_device_type": 1 00:19:15.425 }, 00:19:15.425 { 00:19:15.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.425 "dma_device_type": 2 00:19:15.425 } 00:19:15.425 ], 00:19:15.425 "driver_specific": {} 00:19:15.425 } 00:19:15.425 ] 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.425 "name": "Existed_Raid", 00:19:15.425 "uuid": "301e0359-e1df-4367-ba95-51f5c5cdd37b", 00:19:15.425 "strip_size_kb": 0, 00:19:15.425 "state": "online", 
00:19:15.425 "raid_level": "raid1", 00:19:15.425 "superblock": true, 00:19:15.425 "num_base_bdevs": 2, 00:19:15.425 "num_base_bdevs_discovered": 2, 00:19:15.425 "num_base_bdevs_operational": 2, 00:19:15.425 "base_bdevs_list": [ 00:19:15.425 { 00:19:15.425 "name": "BaseBdev1", 00:19:15.425 "uuid": "54b55d09-e928-4728-89ce-e21d4623fdcb", 00:19:15.425 "is_configured": true, 00:19:15.425 "data_offset": 256, 00:19:15.425 "data_size": 7936 00:19:15.425 }, 00:19:15.425 { 00:19:15.425 "name": "BaseBdev2", 00:19:15.425 "uuid": "d62dc38e-c0fb-4b3b-94d2-c2c154ade6ce", 00:19:15.425 "is_configured": true, 00:19:15.425 "data_offset": 256, 00:19:15.425 "data_size": 7936 00:19:15.425 } 00:19:15.425 ] 00:19:15.425 }' 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.425 20:12:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 [2024-12-05 20:12:17.158166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.995 "name": "Existed_Raid", 00:19:15.995 "aliases": [ 00:19:15.995 "301e0359-e1df-4367-ba95-51f5c5cdd37b" 00:19:15.995 ], 00:19:15.995 "product_name": "Raid Volume", 00:19:15.995 "block_size": 4096, 00:19:15.995 "num_blocks": 7936, 00:19:15.995 "uuid": "301e0359-e1df-4367-ba95-51f5c5cdd37b", 00:19:15.995 "assigned_rate_limits": { 00:19:15.995 "rw_ios_per_sec": 0, 00:19:15.995 "rw_mbytes_per_sec": 0, 00:19:15.995 "r_mbytes_per_sec": 0, 00:19:15.995 "w_mbytes_per_sec": 0 00:19:15.995 }, 00:19:15.995 "claimed": false, 00:19:15.995 "zoned": false, 00:19:15.995 "supported_io_types": { 00:19:15.995 "read": true, 00:19:15.995 "write": true, 00:19:15.995 "unmap": false, 00:19:15.995 "flush": false, 00:19:15.995 "reset": true, 00:19:15.995 "nvme_admin": false, 00:19:15.995 "nvme_io": false, 00:19:15.995 "nvme_io_md": false, 00:19:15.995 "write_zeroes": true, 00:19:15.995 "zcopy": false, 00:19:15.995 "get_zone_info": false, 00:19:15.995 "zone_management": false, 00:19:15.995 "zone_append": false, 00:19:15.995 "compare": false, 00:19:15.995 "compare_and_write": false, 00:19:15.995 "abort": false, 00:19:15.995 "seek_hole": false, 00:19:15.995 "seek_data": false, 00:19:15.995 "copy": false, 00:19:15.995 "nvme_iov_md": false 00:19:15.995 }, 00:19:15.995 "memory_domains": [ 00:19:15.995 { 00:19:15.995 "dma_device_id": "system", 00:19:15.995 "dma_device_type": 1 00:19:15.995 }, 00:19:15.995 { 00:19:15.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.995 "dma_device_type": 2 00:19:15.995 }, 00:19:15.995 { 00:19:15.995 
"dma_device_id": "system", 00:19:15.995 "dma_device_type": 1 00:19:15.995 }, 00:19:15.995 { 00:19:15.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.995 "dma_device_type": 2 00:19:15.995 } 00:19:15.995 ], 00:19:15.995 "driver_specific": { 00:19:15.995 "raid": { 00:19:15.995 "uuid": "301e0359-e1df-4367-ba95-51f5c5cdd37b", 00:19:15.995 "strip_size_kb": 0, 00:19:15.995 "state": "online", 00:19:15.995 "raid_level": "raid1", 00:19:15.995 "superblock": true, 00:19:15.995 "num_base_bdevs": 2, 00:19:15.995 "num_base_bdevs_discovered": 2, 00:19:15.995 "num_base_bdevs_operational": 2, 00:19:15.995 "base_bdevs_list": [ 00:19:15.995 { 00:19:15.995 "name": "BaseBdev1", 00:19:15.995 "uuid": "54b55d09-e928-4728-89ce-e21d4623fdcb", 00:19:15.995 "is_configured": true, 00:19:15.995 "data_offset": 256, 00:19:15.995 "data_size": 7936 00:19:15.995 }, 00:19:15.995 { 00:19:15.995 "name": "BaseBdev2", 00:19:15.995 "uuid": "d62dc38e-c0fb-4b3b-94d2-c2c154ade6ce", 00:19:15.995 "is_configured": true, 00:19:15.995 "data_offset": 256, 00:19:15.995 "data_size": 7936 00:19:15.995 } 00:19:15.995 ] 00:19:15.995 } 00:19:15.995 } 00:19:15.995 }' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:15.995 BaseBdev2' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.995 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:15.996 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:15.996 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:15.996 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.996 
20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.996 [2024-12-05 20:12:17.377571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.255 20:12:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.255 "name": "Existed_Raid", 00:19:16.255 "uuid": "301e0359-e1df-4367-ba95-51f5c5cdd37b", 00:19:16.255 "strip_size_kb": 0, 00:19:16.255 "state": "online", 00:19:16.255 "raid_level": "raid1", 00:19:16.255 "superblock": true, 00:19:16.255 "num_base_bdevs": 2, 00:19:16.255 "num_base_bdevs_discovered": 1, 00:19:16.255 "num_base_bdevs_operational": 1, 00:19:16.255 "base_bdevs_list": [ 00:19:16.255 { 00:19:16.255 "name": null, 00:19:16.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.255 "is_configured": false, 00:19:16.255 "data_offset": 0, 00:19:16.255 "data_size": 7936 00:19:16.255 }, 00:19:16.255 { 00:19:16.255 "name": "BaseBdev2", 00:19:16.255 "uuid": "d62dc38e-c0fb-4b3b-94d2-c2c154ade6ce", 00:19:16.255 "is_configured": true, 00:19:16.255 "data_offset": 256, 00:19:16.255 "data_size": 7936 00:19:16.255 } 00:19:16.255 ] 00:19:16.255 }' 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.255 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.515 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:16.515 20:12:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.515 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:16.515 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.515 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.515 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.774 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.775 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:16.775 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.775 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:16.775 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.775 20:12:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.775 [2024-12-05 20:12:17.980233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.775 [2024-12-05 20:12:17.980400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.775 [2024-12-05 20:12:18.069300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.775 [2024-12-05 20:12:18.069352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.775 [2024-12-05 20:12:18.069363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:16.775 20:12:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85999 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85999 ']' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85999 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85999 00:19:16.775 killing process with pid 85999 00:19:16.775 20:12:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85999' 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85999 00:19:16.775 [2024-12-05 20:12:18.163716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.775 20:12:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85999 00:19:16.775 [2024-12-05 20:12:18.179450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.158 20:12:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:18.158 00:19:18.158 real 0m5.123s 00:19:18.158 user 0m7.435s 00:19:18.158 sys 0m0.930s 00:19:18.158 20:12:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.158 20:12:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.158 ************************************ 00:19:18.158 END TEST raid_state_function_test_sb_4k 00:19:18.158 ************************************ 00:19:18.158 20:12:19 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:18.158 20:12:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:18.158 20:12:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.158 20:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.158 ************************************ 00:19:18.158 START TEST raid_superblock_test_4k 00:19:18.158 ************************************ 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86251 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86251 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86251 ']' 00:19:18.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.158 20:12:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.158 [2024-12-05 20:12:19.417696] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:19:18.158 [2024-12-05 20:12:19.417804] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86251 ] 00:19:18.158 [2024-12-05 20:12:19.590560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.416 [2024-12-05 20:12:19.692825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.675 [2024-12-05 20:12:19.882286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.675 [2024-12-05 20:12:19.882337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:18.935 20:12:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 malloc1 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 [2024-12-05 20:12:20.270483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.935 [2024-12-05 20:12:20.270546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.935 
[2024-12-05 20:12:20.270569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:18.935 [2024-12-05 20:12:20.270579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.935 [2024-12-05 20:12:20.272774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.935 [2024-12-05 20:12:20.272815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.935 pt1 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 malloc2 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 [2024-12-05 20:12:20.326395] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.935 [2024-12-05 20:12:20.326448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.935 [2024-12-05 20:12:20.326473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:18.935 [2024-12-05 20:12:20.326481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.935 [2024-12-05 20:12:20.328476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.935 [2024-12-05 20:12:20.328512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.935 pt2 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.935 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.936 [2024-12-05 20:12:20.338422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.936 [2024-12-05 20:12:20.340143] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:18.936 [2024-12-05 20:12:20.340305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:18.936 [2024-12-05 20:12:20.340321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:18.936 [2024-12-05 20:12:20.340540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:18.936 [2024-12-05 20:12:20.340735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:18.936 [2024-12-05 20:12:20.340762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:18.936 [2024-12-05 20:12:20.340894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.936 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.195 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.196 "name": "raid_bdev1", 00:19:19.196 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:19.196 "strip_size_kb": 0, 00:19:19.196 "state": "online", 00:19:19.196 "raid_level": "raid1", 00:19:19.196 "superblock": true, 00:19:19.196 "num_base_bdevs": 2, 00:19:19.196 "num_base_bdevs_discovered": 2, 00:19:19.196 "num_base_bdevs_operational": 2, 00:19:19.196 "base_bdevs_list": [ 00:19:19.196 { 00:19:19.196 "name": "pt1", 00:19:19.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.196 "is_configured": true, 00:19:19.196 "data_offset": 256, 00:19:19.196 "data_size": 7936 00:19:19.196 }, 00:19:19.196 { 00:19:19.196 "name": "pt2", 00:19:19.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.196 "is_configured": true, 00:19:19.196 "data_offset": 256, 00:19:19.196 "data_size": 7936 00:19:19.196 } 00:19:19.196 ] 00:19:19.196 }' 00:19:19.196 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.196 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:19.455 20:12:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:19.455 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.456 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.456 [2024-12-05 20:12:20.805869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.456 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.456 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.456 "name": "raid_bdev1", 00:19:19.456 "aliases": [ 00:19:19.456 "aa0c1364-4314-4a14-9d7e-1b89d4fa380f" 00:19:19.456 ], 00:19:19.456 "product_name": "Raid Volume", 00:19:19.456 "block_size": 4096, 00:19:19.456 "num_blocks": 7936, 00:19:19.456 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:19.456 "assigned_rate_limits": { 00:19:19.456 "rw_ios_per_sec": 0, 00:19:19.456 "rw_mbytes_per_sec": 0, 00:19:19.456 "r_mbytes_per_sec": 0, 00:19:19.456 "w_mbytes_per_sec": 0 00:19:19.456 }, 00:19:19.456 "claimed": false, 00:19:19.456 "zoned": false, 00:19:19.456 "supported_io_types": { 00:19:19.456 "read": true, 00:19:19.456 "write": true, 00:19:19.456 "unmap": false, 00:19:19.456 "flush": false, 
00:19:19.456 "reset": true, 00:19:19.456 "nvme_admin": false, 00:19:19.456 "nvme_io": false, 00:19:19.456 "nvme_io_md": false, 00:19:19.456 "write_zeroes": true, 00:19:19.456 "zcopy": false, 00:19:19.456 "get_zone_info": false, 00:19:19.456 "zone_management": false, 00:19:19.456 "zone_append": false, 00:19:19.456 "compare": false, 00:19:19.456 "compare_and_write": false, 00:19:19.456 "abort": false, 00:19:19.456 "seek_hole": false, 00:19:19.456 "seek_data": false, 00:19:19.456 "copy": false, 00:19:19.456 "nvme_iov_md": false 00:19:19.456 }, 00:19:19.456 "memory_domains": [ 00:19:19.456 { 00:19:19.456 "dma_device_id": "system", 00:19:19.456 "dma_device_type": 1 00:19:19.456 }, 00:19:19.456 { 00:19:19.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.456 "dma_device_type": 2 00:19:19.456 }, 00:19:19.456 { 00:19:19.456 "dma_device_id": "system", 00:19:19.456 "dma_device_type": 1 00:19:19.456 }, 00:19:19.456 { 00:19:19.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.456 "dma_device_type": 2 00:19:19.456 } 00:19:19.456 ], 00:19:19.456 "driver_specific": { 00:19:19.456 "raid": { 00:19:19.456 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:19.456 "strip_size_kb": 0, 00:19:19.456 "state": "online", 00:19:19.456 "raid_level": "raid1", 00:19:19.456 "superblock": true, 00:19:19.456 "num_base_bdevs": 2, 00:19:19.456 "num_base_bdevs_discovered": 2, 00:19:19.456 "num_base_bdevs_operational": 2, 00:19:19.456 "base_bdevs_list": [ 00:19:19.456 { 00:19:19.456 "name": "pt1", 00:19:19.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.456 "is_configured": true, 00:19:19.456 "data_offset": 256, 00:19:19.456 "data_size": 7936 00:19:19.456 }, 00:19:19.456 { 00:19:19.456 "name": "pt2", 00:19:19.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.456 "is_configured": true, 00:19:19.456 "data_offset": 256, 00:19:19.456 "data_size": 7936 00:19:19.456 } 00:19:19.456 ] 00:19:19.456 } 00:19:19.456 } 00:19:19.456 }' 00:19:19.456 20:12:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:19.716 pt2' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:20 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 20:12:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:19.716 [2024-12-05 20:12:21.025475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa0c1364-4314-4a14-9d7e-1b89d4fa380f 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z aa0c1364-4314-4a14-9d7e-1b89d4fa380f ']' 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 [2024-12-05 20:12:21.073142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.716 [2024-12-05 20:12:21.073164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.716 [2024-12-05 20:12:21.073227] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.716 [2024-12-05 20:12:21.073276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.716 [2024-12-05 20:12:21.073287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.977 [2024-12-05 20:12:21.216933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:19.977 [2024-12-05 20:12:21.218759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:19.977 [2024-12-05 20:12:21.218881] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:19.977 [2024-12-05 20:12:21.219003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:19.977 [2024-12-05 20:12:21.219065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.977 [2024-12-05 20:12:21.219098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:19.977 request: 00:19:19.977 { 00:19:19.977 "name": "raid_bdev1", 00:19:19.977 "raid_level": "raid1", 00:19:19.977 "base_bdevs": [ 00:19:19.977 "malloc1", 00:19:19.977 "malloc2" 00:19:19.977 ], 00:19:19.977 "superblock": false, 00:19:19.977 "method": "bdev_raid_create", 00:19:19.977 "req_id": 1 00:19:19.977 } 00:19:19.977 Got JSON-RPC error response 00:19:19.977 response: 00:19:19.977 { 00:19:19.977 "code": -17, 00:19:19.977 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:19.977 } 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.977 [2024-12-05 20:12:21.284827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.977 [2024-12-05 20:12:21.284925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.977 [2024-12-05 20:12:21.284974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:19.977 [2024-12-05 20:12:21.285010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.977 [2024-12-05 20:12:21.287115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.977 [2024-12-05 20:12:21.287186] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.977 [2024-12-05 20:12:21.287281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:19.977 [2024-12-05 20:12:21.287361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.977 pt1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.977 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.977 "name": "raid_bdev1", 00:19:19.977 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:19.977 "strip_size_kb": 0, 00:19:19.977 "state": "configuring", 00:19:19.977 "raid_level": "raid1", 00:19:19.977 "superblock": true, 00:19:19.977 "num_base_bdevs": 2, 00:19:19.977 "num_base_bdevs_discovered": 1, 00:19:19.977 "num_base_bdevs_operational": 2, 00:19:19.977 "base_bdevs_list": [ 00:19:19.977 { 00:19:19.977 "name": "pt1", 00:19:19.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.977 "is_configured": true, 00:19:19.978 "data_offset": 256, 00:19:19.978 "data_size": 7936 00:19:19.978 }, 00:19:19.978 { 00:19:19.978 "name": null, 00:19:19.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.978 "is_configured": false, 00:19:19.978 "data_offset": 256, 00:19:19.978 "data_size": 7936 00:19:19.978 } 00:19:19.978 ] 00:19:19.978 }' 00:19:19.978 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.978 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:19:20.546 [2024-12-05 20:12:21.732203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.546 [2024-12-05 20:12:21.732299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.546 [2024-12-05 20:12:21.732320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:20.546 [2024-12-05 20:12:21.732330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.546 [2024-12-05 20:12:21.732676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.546 [2024-12-05 20:12:21.732707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.546 [2024-12-05 20:12:21.732783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.546 [2024-12-05 20:12:21.732805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.546 [2024-12-05 20:12:21.732932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.546 [2024-12-05 20:12:21.732944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:20.546 [2024-12-05 20:12:21.733166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:20.546 [2024-12-05 20:12:21.733316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.546 [2024-12-05 20:12:21.733334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:20.546 [2024-12-05 20:12:21.733462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.546 pt2 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.546 20:12:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.546 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.547 "name": "raid_bdev1", 00:19:20.547 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:20.547 
"strip_size_kb": 0, 00:19:20.547 "state": "online", 00:19:20.547 "raid_level": "raid1", 00:19:20.547 "superblock": true, 00:19:20.547 "num_base_bdevs": 2, 00:19:20.547 "num_base_bdevs_discovered": 2, 00:19:20.547 "num_base_bdevs_operational": 2, 00:19:20.547 "base_bdevs_list": [ 00:19:20.547 { 00:19:20.547 "name": "pt1", 00:19:20.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.547 "is_configured": true, 00:19:20.547 "data_offset": 256, 00:19:20.547 "data_size": 7936 00:19:20.547 }, 00:19:20.547 { 00:19:20.547 "name": "pt2", 00:19:20.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.547 "is_configured": true, 00:19:20.547 "data_offset": 256, 00:19:20.547 "data_size": 7936 00:19:20.547 } 00:19:20.547 ] 00:19:20.547 }' 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.547 20:12:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.805 20:12:22 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.805 [2024-12-05 20:12:22.211618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.805 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.065 "name": "raid_bdev1", 00:19:21.065 "aliases": [ 00:19:21.065 "aa0c1364-4314-4a14-9d7e-1b89d4fa380f" 00:19:21.065 ], 00:19:21.065 "product_name": "Raid Volume", 00:19:21.065 "block_size": 4096, 00:19:21.065 "num_blocks": 7936, 00:19:21.065 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:21.065 "assigned_rate_limits": { 00:19:21.065 "rw_ios_per_sec": 0, 00:19:21.065 "rw_mbytes_per_sec": 0, 00:19:21.065 "r_mbytes_per_sec": 0, 00:19:21.065 "w_mbytes_per_sec": 0 00:19:21.065 }, 00:19:21.065 "claimed": false, 00:19:21.065 "zoned": false, 00:19:21.065 "supported_io_types": { 00:19:21.065 "read": true, 00:19:21.065 "write": true, 00:19:21.065 "unmap": false, 00:19:21.065 "flush": false, 00:19:21.065 "reset": true, 00:19:21.065 "nvme_admin": false, 00:19:21.065 "nvme_io": false, 00:19:21.065 "nvme_io_md": false, 00:19:21.065 "write_zeroes": true, 00:19:21.065 "zcopy": false, 00:19:21.065 "get_zone_info": false, 00:19:21.065 "zone_management": false, 00:19:21.065 "zone_append": false, 00:19:21.065 "compare": false, 00:19:21.065 "compare_and_write": false, 00:19:21.065 "abort": false, 00:19:21.065 "seek_hole": false, 00:19:21.065 "seek_data": false, 00:19:21.065 "copy": false, 00:19:21.065 "nvme_iov_md": false 00:19:21.065 }, 00:19:21.065 "memory_domains": [ 00:19:21.065 { 00:19:21.065 "dma_device_id": "system", 00:19:21.065 "dma_device_type": 1 00:19:21.065 }, 00:19:21.065 { 00:19:21.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.065 "dma_device_type": 2 00:19:21.065 }, 00:19:21.065 { 00:19:21.065 "dma_device_id": "system", 00:19:21.065 
"dma_device_type": 1 00:19:21.065 }, 00:19:21.065 { 00:19:21.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.065 "dma_device_type": 2 00:19:21.065 } 00:19:21.065 ], 00:19:21.065 "driver_specific": { 00:19:21.065 "raid": { 00:19:21.065 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:21.065 "strip_size_kb": 0, 00:19:21.065 "state": "online", 00:19:21.065 "raid_level": "raid1", 00:19:21.065 "superblock": true, 00:19:21.065 "num_base_bdevs": 2, 00:19:21.065 "num_base_bdevs_discovered": 2, 00:19:21.065 "num_base_bdevs_operational": 2, 00:19:21.065 "base_bdevs_list": [ 00:19:21.065 { 00:19:21.065 "name": "pt1", 00:19:21.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.065 "is_configured": true, 00:19:21.065 "data_offset": 256, 00:19:21.065 "data_size": 7936 00:19:21.065 }, 00:19:21.065 { 00:19:21.065 "name": "pt2", 00:19:21.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.065 "is_configured": true, 00:19:21.065 "data_offset": 256, 00:19:21.065 "data_size": 7936 00:19:21.065 } 00:19:21.065 ] 00:19:21.065 } 00:19:21.065 } 00:19:21.065 }' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:21.065 pt2' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.065 [2024-12-05 20:12:22.463196] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.065 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' aa0c1364-4314-4a14-9d7e-1b89d4fa380f '!=' aa0c1364-4314-4a14-9d7e-1b89d4fa380f ']' 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.325 [2024-12-05 20:12:22.510925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.325 "name": "raid_bdev1", 00:19:21.325 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:21.325 "strip_size_kb": 0, 00:19:21.325 "state": "online", 00:19:21.325 "raid_level": "raid1", 00:19:21.325 "superblock": true, 00:19:21.325 "num_base_bdevs": 2, 00:19:21.325 "num_base_bdevs_discovered": 1, 00:19:21.325 "num_base_bdevs_operational": 1, 00:19:21.325 "base_bdevs_list": [ 00:19:21.325 { 00:19:21.325 "name": null, 00:19:21.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.325 "is_configured": false, 00:19:21.325 "data_offset": 0, 00:19:21.325 "data_size": 7936 00:19:21.325 }, 00:19:21.325 { 00:19:21.325 "name": "pt2", 00:19:21.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.325 "is_configured": true, 00:19:21.325 "data_offset": 256, 00:19:21.325 "data_size": 7936 00:19:21.325 } 00:19:21.325 ] 00:19:21.325 }' 00:19:21.325 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.325 20:12:22 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 [2024-12-05 20:12:22.942145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.585 [2024-12-05 20:12:22.942209] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.585 [2024-12-05 20:12:22.942281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.585 [2024-12-05 20:12:22.942342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.585 [2024-12-05 20:12:22.942395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.585 20:12:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.585 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 [2024-12-05 20:12:23.014019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.585 [2024-12-05 20:12:23.014068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.585 [2024-12-05 20:12:23.014093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:21.585 [2024-12-05 20:12:23.014103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.585 [2024-12-05 20:12:23.016214] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.585 [2024-12-05 20:12:23.016302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.585 [2024-12-05 20:12:23.016376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.585 [2024-12-05 20:12:23.016429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.585 [2024-12-05 20:12:23.016529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:21.585 [2024-12-05 20:12:23.016541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:21.585 [2024-12-05 20:12:23.016782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:21.585 [2024-12-05 20:12:23.016968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:21.585 [2024-12-05 20:12:23.016979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:21.585 [2024-12-05 20:12:23.017122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.845 pt2 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.845 "name": "raid_bdev1", 00:19:21.845 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:21.845 "strip_size_kb": 0, 00:19:21.845 "state": "online", 00:19:21.845 "raid_level": "raid1", 00:19:21.845 "superblock": true, 00:19:21.845 "num_base_bdevs": 2, 00:19:21.845 "num_base_bdevs_discovered": 1, 00:19:21.845 "num_base_bdevs_operational": 1, 00:19:21.845 "base_bdevs_list": [ 00:19:21.845 { 00:19:21.845 "name": null, 00:19:21.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.845 "is_configured": false, 00:19:21.845 "data_offset": 256, 00:19:21.845 "data_size": 7936 00:19:21.845 }, 00:19:21.845 { 00:19:21.845 "name": "pt2", 00:19:21.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.845 "is_configured": true, 00:19:21.845 "data_offset": 256, 00:19:21.845 "data_size": 7936 00:19:21.845 } 00:19:21.845 ] 00:19:21.845 }' 
00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.845 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 [2024-12-05 20:12:23.445264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.105 [2024-12-05 20:12:23.445338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.105 [2024-12-05 20:12:23.445420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.105 [2024-12-05 20:12:23.445479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.105 [2024-12-05 20:12:23.445523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 [2024-12-05 20:12:23.505170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:22.105 [2024-12-05 20:12:23.505261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.105 [2024-12-05 20:12:23.505303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:22.105 [2024-12-05 20:12:23.505332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.105 [2024-12-05 20:12:23.507390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.105 [2024-12-05 20:12:23.507461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:22.105 [2024-12-05 20:12:23.507575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:22.105 [2024-12-05 20:12:23.507643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:22.105 [2024-12-05 20:12:23.507807] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:22.105 [2024-12-05 20:12:23.507863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.105 [2024-12-05 20:12:23.507981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:22.105 [2024-12-05 20:12:23.508086] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.105 [2024-12-05 20:12:23.508189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:22.105 [2024-12-05 20:12:23.508225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:22.105 [2024-12-05 20:12:23.508484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:22.105 [2024-12-05 20:12:23.508659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:22.105 [2024-12-05 20:12:23.508713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:22.105 [2024-12-05 20:12:23.508936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.105 pt1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.365 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.365 "name": "raid_bdev1", 00:19:22.365 "uuid": "aa0c1364-4314-4a14-9d7e-1b89d4fa380f", 00:19:22.365 "strip_size_kb": 0, 00:19:22.365 "state": "online", 00:19:22.365 "raid_level": "raid1", 00:19:22.365 "superblock": true, 00:19:22.365 "num_base_bdevs": 2, 00:19:22.365 "num_base_bdevs_discovered": 1, 00:19:22.365 "num_base_bdevs_operational": 1, 00:19:22.365 "base_bdevs_list": [ 00:19:22.365 { 00:19:22.365 "name": null, 00:19:22.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.365 "is_configured": false, 00:19:22.365 "data_offset": 256, 00:19:22.365 "data_size": 7936 00:19:22.365 }, 00:19:22.365 { 00:19:22.365 "name": "pt2", 00:19:22.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.365 "is_configured": true, 00:19:22.365 "data_offset": 256, 00:19:22.365 "data_size": 7936 00:19:22.365 } 00:19:22.365 ] 00:19:22.365 }' 00:19:22.365 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.365 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.625 20:12:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.625 [2024-12-05 20:12:23.992529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' aa0c1364-4314-4a14-9d7e-1b89d4fa380f '!=' aa0c1364-4314-4a14-9d7e-1b89d4fa380f ']' 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86251 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86251 ']' 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86251 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86251 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.625 killing process with pid 86251 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86251' 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86251 00:19:22.625 [2024-12-05 20:12:24.058851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.625 [2024-12-05 20:12:24.058927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.625 [2024-12-05 20:12:24.058963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.625 [2024-12-05 20:12:24.058975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:22.625 20:12:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86251 00:19:22.888 [2024-12-05 20:12:24.254225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.274 20:12:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:24.274 ************************************ 00:19:24.275 END TEST raid_superblock_test_4k 00:19:24.275 ************************************ 00:19:24.275 00:19:24.275 real 0m6.001s 00:19:24.275 user 0m9.105s 00:19:24.275 sys 0m1.106s 00:19:24.275 20:12:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.275 20:12:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 20:12:25 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:19:24.275 20:12:25 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:24.275 20:12:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:24.275 20:12:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.275 20:12:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 ************************************ 00:19:24.275 START TEST raid_rebuild_test_sb_4k 00:19:24.275 ************************************ 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86574 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86574 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86574 ']' 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.275 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 [2024-12-05 20:12:25.513541] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:19:24.275 [2024-12-05 20:12:25.513763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86574 ] 00:19:24.275 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:24.275 Zero copy mechanism will not be used. 00:19:24.275 [2024-12-05 20:12:25.687152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.533 [2024-12-05 20:12:25.792186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.533 [2024-12-05 20:12:25.967900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.533 [2024-12-05 20:12:25.968035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.102 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:25.103 
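As a quick check on the EAL zero-copy notice in the trace above (a small arithmetic sketch, not part of the test itself; the 3 MiB I/O size comes from the bdevperf `-o 3M` flag shown earlier in the log):

```python
# bdevperf was started with -o 3M, so each I/O is 3 MiB. The EAL notice
# reports that this exceeds the 65536-byte zero-copy threshold, which is
# why zero copy is disabled for this run.
io_size = 3 * 1024 * 1024
zero_copy_threshold = 65536
print(io_size)                         # 3145728, as printed in the notice
print(io_size > zero_copy_threshold)   # True -> zero copy not used
```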
20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 BaseBdev1_malloc 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 [2024-12-05 20:12:26.367226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.103 [2024-12-05 20:12:26.367284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.103 [2024-12-05 20:12:26.367306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:25.103 [2024-12-05 20:12:26.367316] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.103 [2024-12-05 20:12:26.369334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.103 [2024-12-05 20:12:26.369376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.103 BaseBdev1 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.103 BaseBdev2_malloc 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 [2024-12-05 20:12:26.418579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:25.103 [2024-12-05 20:12:26.418701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.103 [2024-12-05 20:12:26.418729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:25.103 [2024-12-05 20:12:26.418739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.103 [2024-12-05 20:12:26.420744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.103 [2024-12-05 20:12:26.420793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:25.103 BaseBdev2 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 spare_malloc 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 spare_delay 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 [2024-12-05 20:12:26.517489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.103 [2024-12-05 20:12:26.517613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.103 [2024-12-05 20:12:26.517636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:25.103 [2024-12-05 20:12:26.517647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.103 [2024-12-05 20:12:26.519739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.103 [2024-12-05 20:12:26.519780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.103 spare 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.103 
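For readers following the sizes in this trace, the block accounting behind the numbers the log reports works out as below (a sketch using only values taken from the log; the 256-block offset matches the space the `-s` superblock option reserves on each base bdev):

```python
# bdev_malloc_create 32 4096 -> a 32 MiB bdev with 4096-byte blocks.
malloc_blocks = 32 * 1024 * 1024 // 4096   # 8192 blocks per base bdev
# The superblock reserves 256 blocks at the front, leaving the region the
# trace reports as data_offset 256 / data_size 7936.
data_offset = 256
data_size = malloc_blocks - data_offset
# The dd step later fills the whole raid bdev: 7936 blocks of 4096 bytes.
total_bytes = data_size * 4096
print(malloc_blocks, data_size, total_bytes)  # 8192 7936 32505856
```

The 32505856-byte figure is exactly what the `dd if=/dev/urandom of=/dev/nbd0` step reports copying further down in the trace.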
[2024-12-05 20:12:26.529524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.103 [2024-12-05 20:12:26.531211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.103 [2024-12-05 20:12:26.531385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:25.103 [2024-12-05 20:12:26.531400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:25.103 [2024-12-05 20:12:26.531616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:25.103 [2024-12-05 20:12:26.531782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:25.103 [2024-12-05 20:12:26.531791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:25.103 [2024-12-05 20:12:26.531940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.103 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.363 "name": "raid_bdev1", 00:19:25.363 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:25.363 "strip_size_kb": 0, 00:19:25.363 "state": "online", 00:19:25.363 "raid_level": "raid1", 00:19:25.363 "superblock": true, 00:19:25.363 "num_base_bdevs": 2, 00:19:25.363 "num_base_bdevs_discovered": 2, 00:19:25.363 "num_base_bdevs_operational": 2, 00:19:25.363 "base_bdevs_list": [ 00:19:25.363 { 00:19:25.363 "name": "BaseBdev1", 00:19:25.363 "uuid": "b646acaf-dcb0-59a4-91c2-0ba1ffaec236", 00:19:25.363 "is_configured": true, 00:19:25.363 "data_offset": 256, 00:19:25.363 "data_size": 7936 00:19:25.363 }, 00:19:25.363 { 00:19:25.363 "name": "BaseBdev2", 00:19:25.363 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:25.363 "is_configured": true, 00:19:25.363 "data_offset": 256, 00:19:25.363 "data_size": 7936 00:19:25.363 } 00:19:25.363 ] 00:19:25.363 }' 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.363 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:19:25.622 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.622 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:25.622 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.622 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.622 [2024-12-05 20:12:27.001014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:25.622 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:25.883 [2024-12-05 20:12:27.252379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:25.883 /dev/nbd0 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:25.883 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.143 1+0 records in 00:19:26.143 1+0 records out 00:19:26.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516835 s, 7.9 MB/s 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:26.143 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:26.713 7936+0 records in 00:19:26.713 7936+0 records out 00:19:26.713 32505856 bytes (33 MB, 31 MiB) copied, 0.619165 s, 52.5 MB/s 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.713 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:26.972 [2024-12-05 20:12:28.164038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.972 [2024-12-05 20:12:28.180103] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.972 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.973 "name": 
"raid_bdev1", 00:19:26.973 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:26.973 "strip_size_kb": 0, 00:19:26.973 "state": "online", 00:19:26.973 "raid_level": "raid1", 00:19:26.973 "superblock": true, 00:19:26.973 "num_base_bdevs": 2, 00:19:26.973 "num_base_bdevs_discovered": 1, 00:19:26.973 "num_base_bdevs_operational": 1, 00:19:26.973 "base_bdevs_list": [ 00:19:26.973 { 00:19:26.973 "name": null, 00:19:26.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.973 "is_configured": false, 00:19:26.973 "data_offset": 0, 00:19:26.973 "data_size": 7936 00:19:26.973 }, 00:19:26.973 { 00:19:26.973 "name": "BaseBdev2", 00:19:26.973 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:26.973 "is_configured": true, 00:19:26.973 "data_offset": 256, 00:19:26.973 "data_size": 7936 00:19:26.973 } 00:19:26.973 ] 00:19:26.973 }' 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.973 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.232 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:27.232 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.232 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.232 [2024-12-05 20:12:28.643308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:27.232 [2024-12-05 20:12:28.660319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:27.232 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.232 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:27.232 [2024-12-05 20:12:28.662186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:28.613 20:12:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.613 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.614 "name": "raid_bdev1", 00:19:28.614 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:28.614 "strip_size_kb": 0, 00:19:28.614 "state": "online", 00:19:28.614 "raid_level": "raid1", 00:19:28.614 "superblock": true, 00:19:28.614 "num_base_bdevs": 2, 00:19:28.614 "num_base_bdevs_discovered": 2, 00:19:28.614 "num_base_bdevs_operational": 2, 00:19:28.614 "process": { 00:19:28.614 "type": "rebuild", 00:19:28.614 "target": "spare", 00:19:28.614 "progress": { 00:19:28.614 "blocks": 2560, 00:19:28.614 "percent": 32 00:19:28.614 } 00:19:28.614 }, 00:19:28.614 "base_bdevs_list": [ 00:19:28.614 { 00:19:28.614 "name": "spare", 00:19:28.614 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:28.614 "is_configured": true, 00:19:28.614 "data_offset": 256, 
00:19:28.614 "data_size": 7936 00:19:28.614 }, 00:19:28.614 { 00:19:28.614 "name": "BaseBdev2", 00:19:28.614 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:28.614 "is_configured": true, 00:19:28.614 "data_offset": 256, 00:19:28.614 "data_size": 7936 00:19:28.614 } 00:19:28.614 ] 00:19:28.614 }' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.614 [2024-12-05 20:12:29.801995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.614 [2024-12-05 20:12:29.866904] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:28.614 [2024-12-05 20:12:29.866960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.614 [2024-12-05 20:12:29.866974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.614 [2024-12-05 20:12:29.866983] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.614 
20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.614 "name": "raid_bdev1", 00:19:28.614 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:28.614 "strip_size_kb": 0, 00:19:28.614 "state": "online", 00:19:28.614 "raid_level": "raid1", 00:19:28.614 "superblock": true, 00:19:28.614 "num_base_bdevs": 2, 00:19:28.614 "num_base_bdevs_discovered": 1, 00:19:28.614 
"num_base_bdevs_operational": 1, 00:19:28.614 "base_bdevs_list": [ 00:19:28.614 { 00:19:28.614 "name": null, 00:19:28.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.614 "is_configured": false, 00:19:28.614 "data_offset": 0, 00:19:28.614 "data_size": 7936 00:19:28.614 }, 00:19:28.614 { 00:19:28.614 "name": "BaseBdev2", 00:19:28.614 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:28.614 "is_configured": true, 00:19:28.614 "data_offset": 256, 00:19:28.614 "data_size": 7936 00:19:28.614 } 00:19:28.614 ] 00:19:28.614 }' 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.614 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.200 
"name": "raid_bdev1", 00:19:29.200 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:29.200 "strip_size_kb": 0, 00:19:29.200 "state": "online", 00:19:29.200 "raid_level": "raid1", 00:19:29.200 "superblock": true, 00:19:29.200 "num_base_bdevs": 2, 00:19:29.200 "num_base_bdevs_discovered": 1, 00:19:29.200 "num_base_bdevs_operational": 1, 00:19:29.200 "base_bdevs_list": [ 00:19:29.200 { 00:19:29.200 "name": null, 00:19:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.200 "is_configured": false, 00:19:29.200 "data_offset": 0, 00:19:29.200 "data_size": 7936 00:19:29.200 }, 00:19:29.200 { 00:19:29.200 "name": "BaseBdev2", 00:19:29.200 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:29.200 "is_configured": true, 00:19:29.200 "data_offset": 256, 00:19:29.200 "data_size": 7936 00:19:29.200 } 00:19:29.200 ] 00:19:29.200 }' 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 [2024-12-05 20:12:30.518956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.200 [2024-12-05 20:12:30.534390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:29.200 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:29.200 [2024-12-05 20:12:30.536180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.177 "name": "raid_bdev1", 00:19:30.177 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:30.177 "strip_size_kb": 0, 00:19:30.177 "state": "online", 00:19:30.177 "raid_level": "raid1", 00:19:30.177 "superblock": true, 00:19:30.177 "num_base_bdevs": 2, 00:19:30.177 "num_base_bdevs_discovered": 2, 00:19:30.177 "num_base_bdevs_operational": 2, 00:19:30.177 "process": { 00:19:30.177 "type": "rebuild", 00:19:30.177 "target": "spare", 00:19:30.177 "progress": { 00:19:30.177 "blocks": 2560, 00:19:30.177 
"percent": 32 00:19:30.177 } 00:19:30.177 }, 00:19:30.177 "base_bdevs_list": [ 00:19:30.177 { 00:19:30.177 "name": "spare", 00:19:30.177 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:30.177 "is_configured": true, 00:19:30.177 "data_offset": 256, 00:19:30.177 "data_size": 7936 00:19:30.177 }, 00:19:30.177 { 00:19:30.177 "name": "BaseBdev2", 00:19:30.177 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:30.177 "is_configured": true, 00:19:30.177 "data_offset": 256, 00:19:30.177 "data_size": 7936 00:19:30.177 } 00:19:30.177 ] 00:19:30.177 }' 00:19:30.177 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.436 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.436 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:30.437 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.437 "name": "raid_bdev1", 00:19:30.437 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:30.437 "strip_size_kb": 0, 00:19:30.437 "state": "online", 00:19:30.437 "raid_level": "raid1", 00:19:30.437 "superblock": true, 00:19:30.437 "num_base_bdevs": 2, 00:19:30.437 "num_base_bdevs_discovered": 2, 00:19:30.437 "num_base_bdevs_operational": 2, 00:19:30.437 "process": { 00:19:30.437 "type": "rebuild", 00:19:30.437 "target": "spare", 00:19:30.437 "progress": { 00:19:30.437 "blocks": 2816, 00:19:30.437 "percent": 35 00:19:30.437 } 00:19:30.437 }, 00:19:30.437 "base_bdevs_list": [ 00:19:30.437 { 00:19:30.437 "name": "spare", 00:19:30.437 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:30.437 "is_configured": true, 00:19:30.437 "data_offset": 256, 00:19:30.437 "data_size": 7936 00:19:30.437 }, 00:19:30.437 { 00:19:30.437 "name": "BaseBdev2", 
00:19:30.437 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:30.437 "is_configured": true, 00:19:30.437 "data_offset": 256, 00:19:30.437 "data_size": 7936 00:19:30.437 } 00:19:30.437 ] 00:19:30.437 }' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.437 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.818 "name": "raid_bdev1", 00:19:31.818 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:31.818 "strip_size_kb": 0, 00:19:31.818 "state": "online", 00:19:31.818 "raid_level": "raid1", 00:19:31.818 "superblock": true, 00:19:31.818 "num_base_bdevs": 2, 00:19:31.818 "num_base_bdevs_discovered": 2, 00:19:31.818 "num_base_bdevs_operational": 2, 00:19:31.818 "process": { 00:19:31.818 "type": "rebuild", 00:19:31.818 "target": "spare", 00:19:31.818 "progress": { 00:19:31.818 "blocks": 5632, 00:19:31.818 "percent": 70 00:19:31.818 } 00:19:31.818 }, 00:19:31.818 "base_bdevs_list": [ 00:19:31.818 { 00:19:31.818 "name": "spare", 00:19:31.818 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:31.818 "is_configured": true, 00:19:31.818 "data_offset": 256, 00:19:31.818 "data_size": 7936 00:19:31.818 }, 00:19:31.818 { 00:19:31.818 "name": "BaseBdev2", 00:19:31.818 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:31.818 "is_configured": true, 00:19:31.818 "data_offset": 256, 00:19:31.818 "data_size": 7936 00:19:31.818 } 00:19:31.818 ] 00:19:31.818 }' 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.818 20:12:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:32.387 [2024-12-05 20:12:33.648236] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:32.387 [2024-12-05 20:12:33.648302] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:32.387 [2024-12-05 20:12:33.648393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.646 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.646 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.646 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.647 20:12:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.647 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.647 "name": "raid_bdev1", 00:19:32.647 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:32.647 "strip_size_kb": 0, 00:19:32.647 "state": "online", 00:19:32.647 "raid_level": "raid1", 00:19:32.647 "superblock": true, 00:19:32.647 "num_base_bdevs": 2, 00:19:32.647 "num_base_bdevs_discovered": 2, 00:19:32.647 "num_base_bdevs_operational": 2, 00:19:32.647 "base_bdevs_list": [ 00:19:32.647 { 00:19:32.647 "name": 
"spare", 00:19:32.647 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:32.647 "is_configured": true, 00:19:32.647 "data_offset": 256, 00:19:32.647 "data_size": 7936 00:19:32.647 }, 00:19:32.647 { 00:19:32.647 "name": "BaseBdev2", 00:19:32.647 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:32.647 "is_configured": true, 00:19:32.647 "data_offset": 256, 00:19:32.647 "data_size": 7936 00:19:32.647 } 00:19:32.647 ] 00:19:32.647 }' 00:19:32.647 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.647 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.906 "name": "raid_bdev1", 00:19:32.906 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:32.906 "strip_size_kb": 0, 00:19:32.906 "state": "online", 00:19:32.906 "raid_level": "raid1", 00:19:32.906 "superblock": true, 00:19:32.906 "num_base_bdevs": 2, 00:19:32.906 "num_base_bdevs_discovered": 2, 00:19:32.906 "num_base_bdevs_operational": 2, 00:19:32.906 "base_bdevs_list": [ 00:19:32.906 { 00:19:32.906 "name": "spare", 00:19:32.906 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:32.906 "is_configured": true, 00:19:32.906 "data_offset": 256, 00:19:32.906 "data_size": 7936 00:19:32.906 }, 00:19:32.906 { 00:19:32.906 "name": "BaseBdev2", 00:19:32.906 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:32.906 "is_configured": true, 00:19:32.906 "data_offset": 256, 00:19:32.906 "data_size": 7936 00:19:32.906 } 00:19:32.906 ] 00:19:32.906 }' 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.906 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.907 "name": "raid_bdev1", 00:19:32.907 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:32.907 "strip_size_kb": 0, 00:19:32.907 "state": "online", 00:19:32.907 "raid_level": "raid1", 00:19:32.907 "superblock": true, 00:19:32.907 "num_base_bdevs": 2, 00:19:32.907 "num_base_bdevs_discovered": 2, 00:19:32.907 "num_base_bdevs_operational": 2, 00:19:32.907 "base_bdevs_list": [ 00:19:32.907 { 00:19:32.907 "name": "spare", 00:19:32.907 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:32.907 "is_configured": true, 00:19:32.907 "data_offset": 256, 00:19:32.907 "data_size": 7936 00:19:32.907 }, 00:19:32.907 
{ 00:19:32.907 "name": "BaseBdev2", 00:19:32.907 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:32.907 "is_configured": true, 00:19:32.907 "data_offset": 256, 00:19:32.907 "data_size": 7936 00:19:32.907 } 00:19:32.907 ] 00:19:32.907 }' 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.907 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.476 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.476 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.476 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.476 [2024-12-05 20:12:34.696247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.476 [2024-12-05 20:12:34.696277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.476 [2024-12-05 20:12:34.696343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.476 [2024-12-05 20:12:34.696402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.477 [2024-12-05 20:12:34.696413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.477 
20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.477 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:33.736 /dev/nbd0 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:33.736 20:12:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.736 1+0 records in 00:19:33.736 1+0 records out 00:19:33.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037302 s, 11.0 MB/s 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.736 20:12:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:33.995 /dev/nbd1 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.995 1+0 records in 00:19:33.995 1+0 records out 00:19:33.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420386 s, 9.7 MB/s 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.995 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.255 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 [2024-12-05 20:12:35.827513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:34.514 [2024-12-05 20:12:35.827573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.514 [2024-12-05 20:12:35.827596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:34.514 [2024-12-05 20:12:35.827606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.514 [2024-12-05 20:12:35.829833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.514 [2024-12-05 20:12:35.829872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:34.514 [2024-12-05 20:12:35.829967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:34.514 [2024-12-05 20:12:35.830019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.514 [2024-12-05 20:12:35.830193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.514 spare 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 [2024-12-05 20:12:35.930103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:34.514 [2024-12-05 20:12:35.930135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:34.514 [2024-12-05 20:12:35.930389] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:34.514 [2024-12-05 20:12:35.930581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:34.514 [2024-12-05 20:12:35.930598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:34.514 [2024-12-05 20:12:35.930765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.514 20:12:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.514 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.773 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.773 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.773 "name": "raid_bdev1", 00:19:34.773 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:34.773 "strip_size_kb": 0, 00:19:34.773 "state": "online", 00:19:34.773 "raid_level": "raid1", 00:19:34.773 "superblock": true, 00:19:34.773 "num_base_bdevs": 2, 00:19:34.773 "num_base_bdevs_discovered": 2, 00:19:34.773 "num_base_bdevs_operational": 2, 00:19:34.773 "base_bdevs_list": [ 00:19:34.773 { 00:19:34.773 "name": "spare", 00:19:34.773 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:34.773 "is_configured": true, 00:19:34.773 "data_offset": 256, 00:19:34.773 "data_size": 7936 00:19:34.773 }, 00:19:34.773 { 00:19:34.773 "name": "BaseBdev2", 00:19:34.773 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:34.773 "is_configured": true, 00:19:34.773 "data_offset": 256, 00:19:34.773 "data_size": 7936 00:19:34.773 } 00:19:34.773 ] 00:19:34.773 }' 00:19:34.773 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.773 20:12:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.032 "name": "raid_bdev1", 00:19:35.032 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:35.032 "strip_size_kb": 0, 00:19:35.032 "state": "online", 00:19:35.032 "raid_level": "raid1", 00:19:35.032 "superblock": true, 00:19:35.032 "num_base_bdevs": 2, 00:19:35.032 "num_base_bdevs_discovered": 2, 00:19:35.032 "num_base_bdevs_operational": 2, 00:19:35.032 "base_bdevs_list": [ 00:19:35.032 { 00:19:35.032 "name": "spare", 00:19:35.032 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:35.032 "is_configured": true, 00:19:35.032 "data_offset": 256, 00:19:35.032 "data_size": 7936 00:19:35.032 }, 00:19:35.032 { 00:19:35.032 "name": "BaseBdev2", 00:19:35.032 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:35.032 "is_configured": true, 00:19:35.032 "data_offset": 256, 00:19:35.032 "data_size": 7936 00:19:35.032 } 00:19:35.032 ] 00:19:35.032 }' 00:19:35.032 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.290 [2024-12-05 20:12:36.622167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.290 20:12:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.290 "name": "raid_bdev1", 00:19:35.290 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:35.290 "strip_size_kb": 0, 00:19:35.290 "state": "online", 00:19:35.290 "raid_level": "raid1", 00:19:35.290 "superblock": true, 00:19:35.290 "num_base_bdevs": 2, 00:19:35.290 "num_base_bdevs_discovered": 1, 00:19:35.290 "num_base_bdevs_operational": 1, 00:19:35.290 "base_bdevs_list": [ 00:19:35.290 { 00:19:35.290 "name": null, 00:19:35.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.290 "is_configured": false, 00:19:35.290 "data_offset": 0, 00:19:35.290 "data_size": 7936 00:19:35.290 }, 00:19:35.290 { 00:19:35.290 "name": "BaseBdev2", 00:19:35.290 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:35.290 "is_configured": true, 00:19:35.290 "data_offset": 256, 00:19:35.290 "data_size": 7936 00:19:35.290 } 00:19:35.290 ] 00:19:35.290 }' 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.290 20:12:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.872 20:12:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.872 20:12:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.872 20:12:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.872 [2024-12-05 20:12:37.045492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.872 [2024-12-05 20:12:37.045681] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:35.872 [2024-12-05 20:12:37.045707] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:35.872 [2024-12-05 20:12:37.045739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.872 [2024-12-05 20:12:37.060475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:35.872 20:12:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.872 20:12:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:35.872 [2024-12-05 20:12:37.062327] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.810 
20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.810 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.810 "name": "raid_bdev1", 00:19:36.810 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:36.810 "strip_size_kb": 0, 00:19:36.810 "state": "online", 00:19:36.810 "raid_level": "raid1", 00:19:36.810 "superblock": true, 00:19:36.810 "num_base_bdevs": 2, 00:19:36.810 "num_base_bdevs_discovered": 2, 00:19:36.810 "num_base_bdevs_operational": 2, 00:19:36.811 "process": { 00:19:36.811 "type": "rebuild", 00:19:36.811 "target": "spare", 00:19:36.811 "progress": { 00:19:36.811 "blocks": 2560, 00:19:36.811 "percent": 32 00:19:36.811 } 00:19:36.811 }, 00:19:36.811 "base_bdevs_list": [ 00:19:36.811 { 00:19:36.811 "name": "spare", 00:19:36.811 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:36.811 "is_configured": true, 00:19:36.811 "data_offset": 256, 00:19:36.811 "data_size": 7936 00:19:36.811 }, 00:19:36.811 { 00:19:36.811 "name": "BaseBdev2", 00:19:36.811 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:36.811 "is_configured": true, 00:19:36.811 "data_offset": 256, 00:19:36.811 "data_size": 7936 00:19:36.811 } 00:19:36.811 ] 00:19:36.811 }' 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.811 20:12:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.811 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.811 [2024-12-05 20:12:38.230214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.070 [2024-12-05 20:12:38.267090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.070 [2024-12-05 20:12:38.267147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.070 [2024-12-05 20:12:38.267161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.070 [2024-12-05 20:12:38.267170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.070 20:12:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.070 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.071 "name": "raid_bdev1", 00:19:37.071 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:37.071 "strip_size_kb": 0, 00:19:37.071 "state": "online", 00:19:37.071 "raid_level": "raid1", 00:19:37.071 "superblock": true, 00:19:37.071 "num_base_bdevs": 2, 00:19:37.071 "num_base_bdevs_discovered": 1, 00:19:37.071 "num_base_bdevs_operational": 1, 00:19:37.071 "base_bdevs_list": [ 00:19:37.071 { 00:19:37.071 "name": null, 00:19:37.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.071 "is_configured": false, 00:19:37.071 "data_offset": 0, 00:19:37.071 "data_size": 7936 00:19:37.071 }, 00:19:37.071 { 00:19:37.071 "name": "BaseBdev2", 00:19:37.071 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:37.071 "is_configured": true, 00:19:37.071 "data_offset": 256, 00:19:37.071 
"data_size": 7936 00:19:37.071 } 00:19:37.071 ] 00:19:37.071 }' 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.071 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.639 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:37.639 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.639 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.639 [2024-12-05 20:12:38.775772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:37.639 [2024-12-05 20:12:38.775829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.639 [2024-12-05 20:12:38.775848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:37.639 [2024-12-05 20:12:38.775858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.639 [2024-12-05 20:12:38.776315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.639 [2024-12-05 20:12:38.776346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:37.639 [2024-12-05 20:12:38.776426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:37.639 [2024-12-05 20:12:38.776446] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:37.639 [2024-12-05 20:12:38.776454] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:37.639 [2024-12-05 20:12:38.776478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.639 [2024-12-05 20:12:38.791572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:37.639 spare 00:19:37.639 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.639 20:12:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:37.639 [2024-12-05 20:12:38.793384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.576 "name": "raid_bdev1", 00:19:38.576 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:38.576 "strip_size_kb": 0, 00:19:38.576 
"state": "online", 00:19:38.576 "raid_level": "raid1", 00:19:38.576 "superblock": true, 00:19:38.576 "num_base_bdevs": 2, 00:19:38.576 "num_base_bdevs_discovered": 2, 00:19:38.576 "num_base_bdevs_operational": 2, 00:19:38.576 "process": { 00:19:38.576 "type": "rebuild", 00:19:38.576 "target": "spare", 00:19:38.576 "progress": { 00:19:38.576 "blocks": 2560, 00:19:38.576 "percent": 32 00:19:38.576 } 00:19:38.576 }, 00:19:38.576 "base_bdevs_list": [ 00:19:38.576 { 00:19:38.576 "name": "spare", 00:19:38.576 "uuid": "af0b3c1a-bab1-58b3-834d-c5965e1878cd", 00:19:38.576 "is_configured": true, 00:19:38.576 "data_offset": 256, 00:19:38.576 "data_size": 7936 00:19:38.576 }, 00:19:38.576 { 00:19:38.576 "name": "BaseBdev2", 00:19:38.576 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:38.576 "is_configured": true, 00:19:38.576 "data_offset": 256, 00:19:38.576 "data_size": 7936 00:19:38.576 } 00:19:38.576 ] 00:19:38.576 }' 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.576 20:12:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.576 [2024-12-05 20:12:39.957190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.576 [2024-12-05 20:12:39.998110] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:38.576 [2024-12-05 20:12:39.998160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.576 [2024-12-05 20:12:39.998176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.576 [2024-12-05 20:12:39.998183] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.835 20:12:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.835 "name": "raid_bdev1", 00:19:38.835 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:38.835 "strip_size_kb": 0, 00:19:38.835 "state": "online", 00:19:38.835 "raid_level": "raid1", 00:19:38.835 "superblock": true, 00:19:38.835 "num_base_bdevs": 2, 00:19:38.835 "num_base_bdevs_discovered": 1, 00:19:38.835 "num_base_bdevs_operational": 1, 00:19:38.835 "base_bdevs_list": [ 00:19:38.835 { 00:19:38.835 "name": null, 00:19:38.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.835 "is_configured": false, 00:19:38.835 "data_offset": 0, 00:19:38.835 "data_size": 7936 00:19:38.835 }, 00:19:38.835 { 00:19:38.835 "name": "BaseBdev2", 00:19:38.835 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:38.835 "is_configured": true, 00:19:38.835 "data_offset": 256, 00:19:38.835 "data_size": 7936 00:19:38.835 } 00:19:38.835 ] 00:19:38.835 }' 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.835 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.094 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.094 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.094 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.094 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.095 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.353 "name": "raid_bdev1", 00:19:39.353 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:39.353 "strip_size_kb": 0, 00:19:39.353 "state": "online", 00:19:39.353 "raid_level": "raid1", 00:19:39.353 "superblock": true, 00:19:39.353 "num_base_bdevs": 2, 00:19:39.353 "num_base_bdevs_discovered": 1, 00:19:39.353 "num_base_bdevs_operational": 1, 00:19:39.353 "base_bdevs_list": [ 00:19:39.353 { 00:19:39.353 "name": null, 00:19:39.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.353 "is_configured": false, 00:19:39.353 "data_offset": 0, 00:19:39.353 "data_size": 7936 00:19:39.353 }, 00:19:39.353 { 00:19:39.353 "name": "BaseBdev2", 00:19:39.353 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:39.353 "is_configured": true, 00:19:39.353 "data_offset": 256, 00:19:39.353 "data_size": 7936 00:19:39.353 } 00:19:39.353 ] 00:19:39.353 }' 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.353 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.353 [2024-12-05 20:12:40.654860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.353 [2024-12-05 20:12:40.654927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.353 [2024-12-05 20:12:40.654954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:39.353 [2024-12-05 20:12:40.654973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.353 [2024-12-05 20:12:40.655423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.353 [2024-12-05 20:12:40.655447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.353 [2024-12-05 20:12:40.655519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:39.354 [2024-12-05 20:12:40.655537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:39.354 [2024-12-05 20:12:40.655550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:39.354 [2024-12-05 20:12:40.655560] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:39.354 BaseBdev1 00:19:39.354 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.354 20:12:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.292 "name": "raid_bdev1", 00:19:40.292 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:40.292 "strip_size_kb": 0, 00:19:40.292 "state": "online", 00:19:40.292 "raid_level": "raid1", 00:19:40.292 "superblock": true, 00:19:40.292 "num_base_bdevs": 2, 00:19:40.292 "num_base_bdevs_discovered": 1, 00:19:40.292 "num_base_bdevs_operational": 1, 00:19:40.292 "base_bdevs_list": [ 00:19:40.292 { 00:19:40.292 "name": null, 00:19:40.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.292 "is_configured": false, 00:19:40.292 "data_offset": 0, 00:19:40.292 "data_size": 7936 00:19:40.292 }, 00:19:40.292 { 00:19:40.292 "name": "BaseBdev2", 00:19:40.292 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:40.292 "is_configured": true, 00:19:40.292 "data_offset": 256, 00:19:40.292 "data_size": 7936 00:19:40.292 } 00:19:40.292 ] 00:19:40.292 }' 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.292 20:12:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.862 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.863 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.863 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.863 "name": "raid_bdev1", 00:19:40.863 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:40.863 "strip_size_kb": 0, 00:19:40.863 "state": "online", 00:19:40.863 "raid_level": "raid1", 00:19:40.863 "superblock": true, 00:19:40.863 "num_base_bdevs": 2, 00:19:40.863 "num_base_bdevs_discovered": 1, 00:19:40.863 "num_base_bdevs_operational": 1, 00:19:40.863 "base_bdevs_list": [ 00:19:40.863 { 00:19:40.863 "name": null, 00:19:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.863 "is_configured": false, 00:19:40.863 "data_offset": 0, 00:19:40.863 "data_size": 7936 00:19:40.863 }, 00:19:40.863 { 00:19:40.863 "name": "BaseBdev2", 00:19:40.863 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:40.863 "is_configured": true, 00:19:40.863 "data_offset": 256, 00:19:40.863 "data_size": 7936 00:19:40.863 } 00:19:40.863 ] 00:19:40.863 }' 00:19:40.863 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.863 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:40.863 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.123 [2024-12-05 20:12:42.316101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.123 [2024-12-05 20:12:42.316259] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.123 [2024-12-05 20:12:42.316279] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:41.123 request: 00:19:41.123 { 00:19:41.123 "base_bdev": "BaseBdev1", 00:19:41.123 "raid_bdev": "raid_bdev1", 00:19:41.123 "method": "bdev_raid_add_base_bdev", 00:19:41.123 "req_id": 1 00:19:41.123 } 00:19:41.123 Got JSON-RPC error response 00:19:41.123 response: 00:19:41.123 { 00:19:41.123 "code": -22, 00:19:41.123 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:41.123 } 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.123 20:12:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:42.063 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.063 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.063 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.063 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.064 "name": "raid_bdev1", 00:19:42.064 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:42.064 "strip_size_kb": 0, 00:19:42.064 "state": "online", 00:19:42.064 "raid_level": "raid1", 00:19:42.064 "superblock": true, 00:19:42.064 "num_base_bdevs": 2, 00:19:42.064 "num_base_bdevs_discovered": 1, 00:19:42.064 "num_base_bdevs_operational": 1, 00:19:42.064 "base_bdevs_list": [ 00:19:42.064 { 00:19:42.064 "name": null, 00:19:42.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.064 "is_configured": false, 00:19:42.064 "data_offset": 0, 00:19:42.064 "data_size": 7936 00:19:42.064 }, 00:19:42.064 { 00:19:42.064 "name": "BaseBdev2", 00:19:42.064 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:42.064 "is_configured": true, 00:19:42.064 "data_offset": 256, 00:19:42.064 "data_size": 7936 00:19:42.064 } 00:19:42.064 ] 00:19:42.064 }' 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.064 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.633 20:12:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.633 "name": "raid_bdev1", 00:19:42.633 "uuid": "d06afad2-b1ce-4ab1-87b8-61f862c22d29", 00:19:42.633 "strip_size_kb": 0, 00:19:42.633 "state": "online", 00:19:42.633 "raid_level": "raid1", 00:19:42.633 "superblock": true, 00:19:42.633 "num_base_bdevs": 2, 00:19:42.633 "num_base_bdevs_discovered": 1, 00:19:42.633 "num_base_bdevs_operational": 1, 00:19:42.633 "base_bdevs_list": [ 00:19:42.633 { 00:19:42.633 "name": null, 00:19:42.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.633 "is_configured": false, 00:19:42.633 "data_offset": 0, 00:19:42.633 "data_size": 7936 00:19:42.633 }, 00:19:42.633 { 00:19:42.633 "name": "BaseBdev2", 00:19:42.633 "uuid": "f40bb7c1-5987-5a62-976c-73cef3dfa5d5", 00:19:42.633 "is_configured": true, 00:19:42.633 "data_offset": 256, 00:19:42.633 "data_size": 7936 00:19:42.633 } 00:19:42.633 ] 00:19:42.633 }' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:42.633 20:12:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86574 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86574 ']' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86574 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86574 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86574' 00:19:42.633 killing process with pid 86574 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86574 00:19:42.633 Received shutdown signal, test time was about 60.000000 seconds 00:19:42.633 00:19:42.633 Latency(us) 00:19:42.633 [2024-12-05T20:12:44.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.633 [2024-12-05T20:12:44.070Z] =================================================================================================================== 00:19:42.633 [2024-12-05T20:12:44.070Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.633 [2024-12-05 20:12:43.960136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.633 [2024-12-05 20:12:43.960257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.633 20:12:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86574 00:19:42.633 [2024-12-05 
20:12:43.960312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.633 [2024-12-05 20:12:43.960332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:42.893 [2024-12-05 20:12:44.238182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.279 20:12:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:44.279 00:19:44.279 real 0m19.865s 00:19:44.279 user 0m25.985s 00:19:44.279 sys 0m2.695s 00:19:44.279 20:12:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.279 20:12:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.279 ************************************ 00:19:44.279 END TEST raid_rebuild_test_sb_4k 00:19:44.279 ************************************ 00:19:44.279 20:12:45 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:44.279 20:12:45 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:44.279 20:12:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:44.279 20:12:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.279 20:12:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.279 ************************************ 00:19:44.279 START TEST raid_state_function_test_sb_md_separate 00:19:44.279 ************************************ 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:44.279 
20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:44.279 20:12:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87265 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:44.279 Process raid pid: 87265 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87265' 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87265 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87265 ']' 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.279 20:12:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.279 [2024-12-05 20:12:45.458437] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:19:44.279 [2024-12-05 20:12:45.458546] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.279 [2024-12-05 20:12:45.632449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.538 [2024-12-05 20:12:45.736330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.538 [2024-12-05 20:12:45.939799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.538 [2024-12-05 20:12:45.939836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.107 [2024-12-05 20:12:46.274732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:45.107 [2024-12-05 20:12:46.274791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:45.107 [2024-12-05 20:12:46.274800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.107 [2024-12-05 20:12:46.274810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.107 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.108 20:12:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.108 "name": "Existed_Raid", 00:19:45.108 "uuid": "a238d6f3-c7bf-4dd5-b269-9f319a4bb265", 00:19:45.108 "strip_size_kb": 0, 00:19:45.108 "state": "configuring", 00:19:45.108 "raid_level": "raid1", 00:19:45.108 "superblock": true, 00:19:45.108 "num_base_bdevs": 2, 00:19:45.108 "num_base_bdevs_discovered": 0, 00:19:45.108 "num_base_bdevs_operational": 2, 00:19:45.108 "base_bdevs_list": [ 00:19:45.108 { 00:19:45.108 "name": "BaseBdev1", 00:19:45.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.108 "is_configured": false, 00:19:45.108 "data_offset": 0, 00:19:45.108 "data_size": 0 00:19:45.108 }, 00:19:45.108 { 00:19:45.108 "name": "BaseBdev2", 00:19:45.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.108 "is_configured": false, 00:19:45.108 "data_offset": 0, 00:19:45.108 "data_size": 0 00:19:45.108 } 00:19:45.108 ] 00:19:45.108 }' 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.108 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.368 [2024-12-05 
20:12:46.749868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.368 [2024-12-05 20:12:46.749913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.368 [2024-12-05 20:12:46.761847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:45.368 [2024-12-05 20:12:46.761895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:45.368 [2024-12-05 20:12:46.761904] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.368 [2024-12-05 20:12:46.761915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.368 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.628 [2024-12-05 20:12:46.810940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.628 BaseBdev1 
00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.628 [ 00:19:45.628 { 00:19:45.628 "name": "BaseBdev1", 00:19:45.628 "aliases": [ 00:19:45.628 "dd25f271-e043-492f-b1dc-6f995aa61c2d" 00:19:45.628 ], 00:19:45.628 "product_name": "Malloc disk", 00:19:45.628 
"block_size": 4096, 00:19:45.628 "num_blocks": 8192, 00:19:45.628 "uuid": "dd25f271-e043-492f-b1dc-6f995aa61c2d", 00:19:45.628 "md_size": 32, 00:19:45.628 "md_interleave": false, 00:19:45.628 "dif_type": 0, 00:19:45.628 "assigned_rate_limits": { 00:19:45.628 "rw_ios_per_sec": 0, 00:19:45.628 "rw_mbytes_per_sec": 0, 00:19:45.628 "r_mbytes_per_sec": 0, 00:19:45.628 "w_mbytes_per_sec": 0 00:19:45.628 }, 00:19:45.628 "claimed": true, 00:19:45.628 "claim_type": "exclusive_write", 00:19:45.628 "zoned": false, 00:19:45.628 "supported_io_types": { 00:19:45.628 "read": true, 00:19:45.628 "write": true, 00:19:45.628 "unmap": true, 00:19:45.628 "flush": true, 00:19:45.628 "reset": true, 00:19:45.628 "nvme_admin": false, 00:19:45.628 "nvme_io": false, 00:19:45.628 "nvme_io_md": false, 00:19:45.628 "write_zeroes": true, 00:19:45.628 "zcopy": true, 00:19:45.628 "get_zone_info": false, 00:19:45.628 "zone_management": false, 00:19:45.628 "zone_append": false, 00:19:45.628 "compare": false, 00:19:45.628 "compare_and_write": false, 00:19:45.628 "abort": true, 00:19:45.628 "seek_hole": false, 00:19:45.628 "seek_data": false, 00:19:45.628 "copy": true, 00:19:45.628 "nvme_iov_md": false 00:19:45.628 }, 00:19:45.628 "memory_domains": [ 00:19:45.628 { 00:19:45.628 "dma_device_id": "system", 00:19:45.628 "dma_device_type": 1 00:19:45.628 }, 00:19:45.628 { 00:19:45.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.628 "dma_device_type": 2 00:19:45.628 } 00:19:45.628 ], 00:19:45.628 "driver_specific": {} 00:19:45.628 } 00:19:45.628 ] 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:45.628 20:12:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.628 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.629 "name": "Existed_Raid", 00:19:45.629 "uuid": "2c0270d0-a6e2-4080-986b-23a987f01b08", 
00:19:45.629 "strip_size_kb": 0, 00:19:45.629 "state": "configuring", 00:19:45.629 "raid_level": "raid1", 00:19:45.629 "superblock": true, 00:19:45.629 "num_base_bdevs": 2, 00:19:45.629 "num_base_bdevs_discovered": 1, 00:19:45.629 "num_base_bdevs_operational": 2, 00:19:45.629 "base_bdevs_list": [ 00:19:45.629 { 00:19:45.629 "name": "BaseBdev1", 00:19:45.629 "uuid": "dd25f271-e043-492f-b1dc-6f995aa61c2d", 00:19:45.629 "is_configured": true, 00:19:45.629 "data_offset": 256, 00:19:45.629 "data_size": 7936 00:19:45.629 }, 00:19:45.629 { 00:19:45.629 "name": "BaseBdev2", 00:19:45.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.629 "is_configured": false, 00:19:45.629 "data_offset": 0, 00:19:45.629 "data_size": 0 00:19:45.629 } 00:19:45.629 ] 00:19:45.629 }' 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.629 20:12:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.889 [2024-12-05 20:12:47.294152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.889 [2024-12-05 20:12:47.294210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:45.889 20:12:47 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.889 [2024-12-05 20:12:47.302177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.889 [2024-12-05 20:12:47.303906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.889 [2024-12-05 20:12:47.303950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.889 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.149 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.149 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.149 "name": "Existed_Raid", 00:19:46.149 "uuid": "2359739e-a0a1-4764-b3ff-a18085bc29a3", 00:19:46.149 "strip_size_kb": 0, 00:19:46.149 "state": "configuring", 00:19:46.149 "raid_level": "raid1", 00:19:46.149 "superblock": true, 00:19:46.149 "num_base_bdevs": 2, 00:19:46.149 "num_base_bdevs_discovered": 1, 00:19:46.149 "num_base_bdevs_operational": 2, 00:19:46.149 "base_bdevs_list": [ 00:19:46.149 { 00:19:46.149 "name": "BaseBdev1", 00:19:46.149 "uuid": "dd25f271-e043-492f-b1dc-6f995aa61c2d", 00:19:46.149 "is_configured": true, 00:19:46.149 "data_offset": 256, 00:19:46.149 "data_size": 7936 00:19:46.149 }, 00:19:46.149 { 00:19:46.149 "name": "BaseBdev2", 00:19:46.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.149 "is_configured": false, 00:19:46.149 "data_offset": 0, 00:19:46.149 "data_size": 0 00:19:46.149 } 00:19:46.149 ] 00:19:46.149 }' 00:19:46.149 20:12:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.149 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.426 [2024-12-05 20:12:47.829748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.426 [2024-12-05 20:12:47.830539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:46.426 [2024-12-05 20:12:47.830614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:46.426 [2024-12-05 20:12:47.830871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:46.426 BaseBdev2 00:19:46.426 [2024-12-05 20:12:47.831332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:46.426 [2024-12-05 20:12:47.831397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:46.426 [2024-12-05 20:12:47.831738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.426 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.426 [ 00:19:46.426 { 00:19:46.426 "name": "BaseBdev2", 00:19:46.426 "aliases": [ 00:19:46.426 "e5471b1f-2978-4dcd-b687-559691d35991" 00:19:46.426 ], 00:19:46.426 "product_name": "Malloc disk", 00:19:46.426 "block_size": 4096, 00:19:46.426 "num_blocks": 8192, 00:19:46.426 "uuid": "e5471b1f-2978-4dcd-b687-559691d35991", 00:19:46.426 "md_size": 32, 00:19:46.426 "md_interleave": false, 00:19:46.426 "dif_type": 0, 00:19:46.426 "assigned_rate_limits": { 00:19:46.426 "rw_ios_per_sec": 0, 00:19:46.426 "rw_mbytes_per_sec": 0, 00:19:46.426 "r_mbytes_per_sec": 0, 00:19:46.426 "w_mbytes_per_sec": 0 00:19:46.426 }, 00:19:46.426 "claimed": true, 00:19:46.426 "claim_type": 
"exclusive_write", 00:19:46.426 "zoned": false, 00:19:46.426 "supported_io_types": { 00:19:46.426 "read": true, 00:19:46.426 "write": true, 00:19:46.426 "unmap": true, 00:19:46.426 "flush": true, 00:19:46.426 "reset": true, 00:19:46.426 "nvme_admin": false, 00:19:46.426 "nvme_io": false, 00:19:46.426 "nvme_io_md": false, 00:19:46.426 "write_zeroes": true, 00:19:46.426 "zcopy": true, 00:19:46.426 "get_zone_info": false, 00:19:46.686 "zone_management": false, 00:19:46.686 "zone_append": false, 00:19:46.686 "compare": false, 00:19:46.686 "compare_and_write": false, 00:19:46.686 "abort": true, 00:19:46.686 "seek_hole": false, 00:19:46.686 "seek_data": false, 00:19:46.686 "copy": true, 00:19:46.686 "nvme_iov_md": false 00:19:46.686 }, 00:19:46.686 "memory_domains": [ 00:19:46.686 { 00:19:46.686 "dma_device_id": "system", 00:19:46.686 "dma_device_type": 1 00:19:46.686 }, 00:19:46.686 { 00:19:46.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.686 "dma_device_type": 2 00:19:46.686 } 00:19:46.686 ], 00:19:46.686 "driver_specific": {} 00:19:46.686 } 00:19:46.686 ] 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.686 
20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.686 "name": "Existed_Raid", 00:19:46.686 "uuid": "2359739e-a0a1-4764-b3ff-a18085bc29a3", 00:19:46.686 "strip_size_kb": 0, 00:19:46.686 "state": "online", 00:19:46.686 "raid_level": "raid1", 00:19:46.686 "superblock": true, 00:19:46.686 "num_base_bdevs": 2, 00:19:46.686 "num_base_bdevs_discovered": 2, 00:19:46.686 "num_base_bdevs_operational": 2, 00:19:46.686 
"base_bdevs_list": [ 00:19:46.686 { 00:19:46.686 "name": "BaseBdev1", 00:19:46.686 "uuid": "dd25f271-e043-492f-b1dc-6f995aa61c2d", 00:19:46.686 "is_configured": true, 00:19:46.686 "data_offset": 256, 00:19:46.686 "data_size": 7936 00:19:46.686 }, 00:19:46.686 { 00:19:46.686 "name": "BaseBdev2", 00:19:46.686 "uuid": "e5471b1f-2978-4dcd-b687-559691d35991", 00:19:46.686 "is_configured": true, 00:19:46.686 "data_offset": 256, 00:19:46.686 "data_size": 7936 00:19:46.686 } 00:19:46.686 ] 00:19:46.686 }' 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.686 20:12:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:19:46.946 [2024-12-05 20:12:48.345145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.946 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.206 "name": "Existed_Raid", 00:19:47.206 "aliases": [ 00:19:47.206 "2359739e-a0a1-4764-b3ff-a18085bc29a3" 00:19:47.206 ], 00:19:47.206 "product_name": "Raid Volume", 00:19:47.206 "block_size": 4096, 00:19:47.206 "num_blocks": 7936, 00:19:47.206 "uuid": "2359739e-a0a1-4764-b3ff-a18085bc29a3", 00:19:47.206 "md_size": 32, 00:19:47.206 "md_interleave": false, 00:19:47.206 "dif_type": 0, 00:19:47.206 "assigned_rate_limits": { 00:19:47.206 "rw_ios_per_sec": 0, 00:19:47.206 "rw_mbytes_per_sec": 0, 00:19:47.206 "r_mbytes_per_sec": 0, 00:19:47.206 "w_mbytes_per_sec": 0 00:19:47.206 }, 00:19:47.206 "claimed": false, 00:19:47.206 "zoned": false, 00:19:47.206 "supported_io_types": { 00:19:47.206 "read": true, 00:19:47.206 "write": true, 00:19:47.206 "unmap": false, 00:19:47.206 "flush": false, 00:19:47.206 "reset": true, 00:19:47.206 "nvme_admin": false, 00:19:47.206 "nvme_io": false, 00:19:47.206 "nvme_io_md": false, 00:19:47.206 "write_zeroes": true, 00:19:47.206 "zcopy": false, 00:19:47.206 "get_zone_info": false, 00:19:47.206 "zone_management": false, 00:19:47.206 "zone_append": false, 00:19:47.206 "compare": false, 00:19:47.206 "compare_and_write": false, 00:19:47.206 "abort": false, 00:19:47.206 "seek_hole": false, 00:19:47.206 "seek_data": false, 00:19:47.206 "copy": false, 00:19:47.206 "nvme_iov_md": false 00:19:47.206 }, 00:19:47.206 "memory_domains": [ 00:19:47.206 { 00:19:47.206 "dma_device_id": "system", 00:19:47.206 "dma_device_type": 1 00:19:47.206 }, 00:19:47.206 { 00:19:47.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.206 "dma_device_type": 2 00:19:47.206 }, 00:19:47.206 { 
00:19:47.206 "dma_device_id": "system", 00:19:47.206 "dma_device_type": 1 00:19:47.206 }, 00:19:47.206 { 00:19:47.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.206 "dma_device_type": 2 00:19:47.206 } 00:19:47.206 ], 00:19:47.206 "driver_specific": { 00:19:47.206 "raid": { 00:19:47.206 "uuid": "2359739e-a0a1-4764-b3ff-a18085bc29a3", 00:19:47.206 "strip_size_kb": 0, 00:19:47.206 "state": "online", 00:19:47.206 "raid_level": "raid1", 00:19:47.206 "superblock": true, 00:19:47.206 "num_base_bdevs": 2, 00:19:47.206 "num_base_bdevs_discovered": 2, 00:19:47.206 "num_base_bdevs_operational": 2, 00:19:47.206 "base_bdevs_list": [ 00:19:47.206 { 00:19:47.206 "name": "BaseBdev1", 00:19:47.206 "uuid": "dd25f271-e043-492f-b1dc-6f995aa61c2d", 00:19:47.206 "is_configured": true, 00:19:47.206 "data_offset": 256, 00:19:47.206 "data_size": 7936 00:19:47.206 }, 00:19:47.206 { 00:19:47.206 "name": "BaseBdev2", 00:19:47.206 "uuid": "e5471b1f-2978-4dcd-b687-559691d35991", 00:19:47.206 "is_configured": true, 00:19:47.206 "data_offset": 256, 00:19:47.206 "data_size": 7936 00:19:47.206 } 00:19:47.206 ] 00:19:47.206 } 00:19:47.206 } 00:19:47.206 }' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:47.206 BaseBdev2' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.206 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.206 [2024-12-05 20:12:48.564652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.466 "name": "Existed_Raid", 00:19:47.466 "uuid": "2359739e-a0a1-4764-b3ff-a18085bc29a3", 00:19:47.466 "strip_size_kb": 0, 00:19:47.466 "state": "online", 00:19:47.466 "raid_level": "raid1", 00:19:47.466 "superblock": true, 00:19:47.466 "num_base_bdevs": 2, 00:19:47.466 "num_base_bdevs_discovered": 1, 00:19:47.466 "num_base_bdevs_operational": 1, 00:19:47.466 "base_bdevs_list": [ 00:19:47.466 { 00:19:47.466 "name": null, 00:19:47.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.466 "is_configured": false, 00:19:47.466 "data_offset": 0, 00:19:47.466 "data_size": 7936 00:19:47.466 }, 00:19:47.466 { 00:19:47.466 "name": "BaseBdev2", 00:19:47.466 "uuid": 
"e5471b1f-2978-4dcd-b687-559691d35991", 00:19:47.466 "is_configured": true, 00:19:47.466 "data_offset": 256, 00:19:47.466 "data_size": 7936 00:19:47.466 } 00:19:47.466 ] 00:19:47.466 }' 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.466 20:12:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.725 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.985 [2024-12-05 20:12:49.175423] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:47.985 [2024-12-05 20:12:49.175522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.985 [2024-12-05 20:12:49.272934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.985 [2024-12-05 20:12:49.272989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.985 [2024-12-05 20:12:49.273002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:47.985 20:12:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87265 00:19:47.985 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87265 ']' 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87265 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87265 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.986 killing process with pid 87265 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87265' 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87265 00:19:47.986 [2024-12-05 20:12:49.370804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.986 20:12:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87265 00:19:47.986 [2024-12-05 20:12:49.386925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.366 20:12:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:49.366 00:19:49.366 real 0m5.097s 00:19:49.366 user 0m7.377s 00:19:49.366 sys 0m0.897s 00:19:49.366 20:12:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.366 
20:12:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 ************************************ 00:19:49.366 END TEST raid_state_function_test_sb_md_separate 00:19:49.366 ************************************ 00:19:49.366 20:12:50 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:49.366 20:12:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:49.366 20:12:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.366 20:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 ************************************ 00:19:49.366 START TEST raid_superblock_test_md_separate 00:19:49.366 ************************************ 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87513 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87513 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87513 ']' 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.366 20:12:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 [2024-12-05 20:12:50.635554] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:19:49.366 [2024-12-05 20:12:50.635683] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87513 ] 00:19:49.626 [2024-12-05 20:12:50.814870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.626 [2024-12-05 20:12:50.922429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.885 [2024-12-05 20:12:51.111948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:49.885 [2024-12-05 20:12:51.112003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:50.145 20:12:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.145 malloc1 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.145 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.145 [2024-12-05 20:12:51.517271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:50.146 [2024-12-05 20:12:51.517347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.146 [2024-12-05 20:12:51.517369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:50.146 [2024-12-05 20:12:51.517379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.146 [2024-12-05 20:12:51.519180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.146 [2024-12-05 20:12:51.519216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:50.146 pt1 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.146 malloc2 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.146 20:12:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.146 [2024-12-05 20:12:51.572807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:50.146 [2024-12-05 20:12:51.572877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.146 [2024-12-05 20:12:51.572896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:50.146 [2024-12-05 20:12:51.572918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.146 [2024-12-05 20:12:51.574681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.146 [2024-12-05 20:12:51.574716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:50.146 pt2 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.146 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.406 [2024-12-05 20:12:51.584824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:50.406 [2024-12-05 20:12:51.586574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.406 [2024-12-05 20:12:51.586740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:50.406 [2024-12-05 20:12:51.586754] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:50.406 [2024-12-05 20:12:51.586821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:50.406 [2024-12-05 20:12:51.586962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:50.406 [2024-12-05 20:12:51.586982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:50.406 [2024-12-05 20:12:51.587073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.406 20:12:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.406 "name": "raid_bdev1", 00:19:50.406 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:50.406 "strip_size_kb": 0, 00:19:50.406 "state": "online", 00:19:50.406 "raid_level": "raid1", 00:19:50.406 "superblock": true, 00:19:50.406 "num_base_bdevs": 2, 00:19:50.406 "num_base_bdevs_discovered": 2, 00:19:50.406 "num_base_bdevs_operational": 2, 00:19:50.406 "base_bdevs_list": [ 00:19:50.406 { 00:19:50.406 "name": "pt1", 00:19:50.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.406 "is_configured": true, 00:19:50.406 "data_offset": 256, 00:19:50.406 "data_size": 7936 00:19:50.406 }, 00:19:50.406 { 00:19:50.406 "name": "pt2", 00:19:50.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.406 "is_configured": true, 00:19:50.406 "data_offset": 256, 00:19:50.406 "data_size": 7936 00:19:50.406 } 00:19:50.406 ] 00:19:50.406 }' 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.406 20:12:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 [2024-12-05 20:12:52.048267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.665 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:50.665 "name": "raid_bdev1", 00:19:50.665 "aliases": [ 00:19:50.665 "b2e07d42-0c60-42ad-9369-4891241617ca" 00:19:50.665 ], 00:19:50.665 "product_name": "Raid Volume", 00:19:50.665 "block_size": 4096, 00:19:50.665 "num_blocks": 7936, 00:19:50.665 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:50.665 "md_size": 32, 00:19:50.665 "md_interleave": false, 00:19:50.665 "dif_type": 0, 00:19:50.665 "assigned_rate_limits": { 00:19:50.665 "rw_ios_per_sec": 0, 00:19:50.665 "rw_mbytes_per_sec": 0, 00:19:50.665 "r_mbytes_per_sec": 0, 00:19:50.666 "w_mbytes_per_sec": 0 00:19:50.666 }, 00:19:50.666 "claimed": false, 00:19:50.666 "zoned": false, 
00:19:50.666 "supported_io_types": { 00:19:50.666 "read": true, 00:19:50.666 "write": true, 00:19:50.666 "unmap": false, 00:19:50.666 "flush": false, 00:19:50.666 "reset": true, 00:19:50.666 "nvme_admin": false, 00:19:50.666 "nvme_io": false, 00:19:50.666 "nvme_io_md": false, 00:19:50.666 "write_zeroes": true, 00:19:50.666 "zcopy": false, 00:19:50.666 "get_zone_info": false, 00:19:50.666 "zone_management": false, 00:19:50.666 "zone_append": false, 00:19:50.666 "compare": false, 00:19:50.666 "compare_and_write": false, 00:19:50.666 "abort": false, 00:19:50.666 "seek_hole": false, 00:19:50.666 "seek_data": false, 00:19:50.666 "copy": false, 00:19:50.666 "nvme_iov_md": false 00:19:50.666 }, 00:19:50.666 "memory_domains": [ 00:19:50.666 { 00:19:50.666 "dma_device_id": "system", 00:19:50.666 "dma_device_type": 1 00:19:50.666 }, 00:19:50.666 { 00:19:50.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.666 "dma_device_type": 2 00:19:50.666 }, 00:19:50.666 { 00:19:50.666 "dma_device_id": "system", 00:19:50.666 "dma_device_type": 1 00:19:50.666 }, 00:19:50.666 { 00:19:50.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.666 "dma_device_type": 2 00:19:50.666 } 00:19:50.666 ], 00:19:50.666 "driver_specific": { 00:19:50.666 "raid": { 00:19:50.666 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:50.666 "strip_size_kb": 0, 00:19:50.666 "state": "online", 00:19:50.666 "raid_level": "raid1", 00:19:50.666 "superblock": true, 00:19:50.666 "num_base_bdevs": 2, 00:19:50.666 "num_base_bdevs_discovered": 2, 00:19:50.666 "num_base_bdevs_operational": 2, 00:19:50.666 "base_bdevs_list": [ 00:19:50.666 { 00:19:50.666 "name": "pt1", 00:19:50.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.666 "is_configured": true, 00:19:50.666 "data_offset": 256, 00:19:50.666 "data_size": 7936 00:19:50.666 }, 00:19:50.666 { 00:19:50.666 "name": "pt2", 00:19:50.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.666 "is_configured": true, 00:19:50.666 "data_offset": 256, 
00:19:50.666 "data_size": 7936 00:19:50.666 } 00:19:50.666 ] 00:19:50.666 } 00:19:50.666 } 00:19:50.666 }' 00:19:50.666 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:50.925 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:50.925 pt2' 00:19:50.925 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.925 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:50.925 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 [2024-12-05 20:12:52.243823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2e07d42-0c60-42ad-9369-4891241617ca 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b2e07d42-0c60-42ad-9369-4891241617ca ']' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.926 20:12:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 [2024-12-05 20:12:52.287530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.926 [2024-12-05 20:12:52.287599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.926 [2024-12-05 20:12:52.287701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.926 [2024-12-05 20:12:52.287768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.926 [2024-12-05 20:12:52.287801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.926 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.186 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.186 [2024-12-05 20:12:52.427298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:51.186 [2024-12-05 20:12:52.429107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:51.186 [2024-12-05 20:12:52.429185] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:51.186 [2024-12-05 20:12:52.429235] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:51.186 [2024-12-05 20:12:52.429249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.186 [2024-12-05 20:12:52.429259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:19:51.187 request: 00:19:51.187 { 00:19:51.187 "name": "raid_bdev1", 00:19:51.187 "raid_level": "raid1", 00:19:51.187 "base_bdevs": [ 00:19:51.187 "malloc1", 00:19:51.187 "malloc2" 00:19:51.187 ], 00:19:51.187 "superblock": false, 00:19:51.187 "method": "bdev_raid_create", 00:19:51.187 "req_id": 1 00:19:51.187 } 00:19:51.187 Got JSON-RPC error response 00:19:51.187 response: 00:19:51.187 { 00:19:51.187 "code": -17, 00:19:51.187 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:51.187 } 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.187 [2024-12-05 20:12:52.491172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:51.187 [2024-12-05 20:12:52.491284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.187 [2024-12-05 20:12:52.491316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:51.187 [2024-12-05 20:12:52.491344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.187 [2024-12-05 20:12:52.493213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.187 [2024-12-05 20:12:52.493288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:51.187 [2024-12-05 20:12:52.493370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:51.187 [2024-12-05 20:12:52.493458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:51.187 pt1 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.187 "name": "raid_bdev1", 00:19:51.187 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:51.187 "strip_size_kb": 0, 00:19:51.187 "state": "configuring", 00:19:51.187 "raid_level": "raid1", 00:19:51.187 "superblock": true, 00:19:51.187 "num_base_bdevs": 2, 00:19:51.187 "num_base_bdevs_discovered": 1, 00:19:51.187 "num_base_bdevs_operational": 2, 00:19:51.187 "base_bdevs_list": [ 00:19:51.187 { 00:19:51.187 "name": "pt1", 00:19:51.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:51.187 "is_configured": true, 00:19:51.187 "data_offset": 256, 00:19:51.187 "data_size": 7936 00:19:51.187 }, 00:19:51.187 { 
00:19:51.187 "name": null, 00:19:51.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.187 "is_configured": false, 00:19:51.187 "data_offset": 256, 00:19:51.187 "data_size": 7936 00:19:51.187 } 00:19:51.187 ] 00:19:51.187 }' 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.187 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.757 [2024-12-05 20:12:52.958374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:51.757 [2024-12-05 20:12:52.958438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.757 [2024-12-05 20:12:52.958456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:51.757 [2024-12-05 20:12:52.958466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.757 [2024-12-05 20:12:52.958650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.757 [2024-12-05 20:12:52.958667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:51.757 [2024-12-05 20:12:52.958706] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:51.757 [2024-12-05 20:12:52.958727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:51.757 [2024-12-05 20:12:52.958830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:51.757 [2024-12-05 20:12:52.958841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:51.757 [2024-12-05 20:12:52.958921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:51.757 [2024-12-05 20:12:52.959040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:51.757 [2024-12-05 20:12:52.959048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:51.757 [2024-12-05 20:12:52.959131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.757 pt2 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.757 20:12:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.757 20:12:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.757 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.757 "name": "raid_bdev1", 00:19:51.757 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:51.757 "strip_size_kb": 0, 00:19:51.757 "state": "online", 00:19:51.757 "raid_level": "raid1", 00:19:51.757 "superblock": true, 00:19:51.757 "num_base_bdevs": 2, 00:19:51.757 "num_base_bdevs_discovered": 2, 00:19:51.757 "num_base_bdevs_operational": 2, 00:19:51.757 "base_bdevs_list": [ 00:19:51.757 { 00:19:51.757 "name": "pt1", 00:19:51.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:51.757 "is_configured": true, 00:19:51.757 "data_offset": 256, 00:19:51.757 "data_size": 7936 00:19:51.757 }, 00:19:51.757 { 00:19:51.757 "name": "pt2", 00:19:51.757 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:51.757 "is_configured": true, 00:19:51.757 "data_offset": 256, 00:19:51.757 "data_size": 7936 00:19:51.757 } 00:19:51.757 ] 00:19:51.757 }' 00:19:51.757 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.757 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.017 [2024-12-05 20:12:53.417819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.017 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:52.277 "name": "raid_bdev1", 00:19:52.277 
"aliases": [ 00:19:52.277 "b2e07d42-0c60-42ad-9369-4891241617ca" 00:19:52.277 ], 00:19:52.277 "product_name": "Raid Volume", 00:19:52.277 "block_size": 4096, 00:19:52.277 "num_blocks": 7936, 00:19:52.277 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:52.277 "md_size": 32, 00:19:52.277 "md_interleave": false, 00:19:52.277 "dif_type": 0, 00:19:52.277 "assigned_rate_limits": { 00:19:52.277 "rw_ios_per_sec": 0, 00:19:52.277 "rw_mbytes_per_sec": 0, 00:19:52.277 "r_mbytes_per_sec": 0, 00:19:52.277 "w_mbytes_per_sec": 0 00:19:52.277 }, 00:19:52.277 "claimed": false, 00:19:52.277 "zoned": false, 00:19:52.277 "supported_io_types": { 00:19:52.277 "read": true, 00:19:52.277 "write": true, 00:19:52.277 "unmap": false, 00:19:52.277 "flush": false, 00:19:52.277 "reset": true, 00:19:52.277 "nvme_admin": false, 00:19:52.277 "nvme_io": false, 00:19:52.277 "nvme_io_md": false, 00:19:52.277 "write_zeroes": true, 00:19:52.277 "zcopy": false, 00:19:52.277 "get_zone_info": false, 00:19:52.277 "zone_management": false, 00:19:52.277 "zone_append": false, 00:19:52.277 "compare": false, 00:19:52.277 "compare_and_write": false, 00:19:52.277 "abort": false, 00:19:52.277 "seek_hole": false, 00:19:52.277 "seek_data": false, 00:19:52.277 "copy": false, 00:19:52.277 "nvme_iov_md": false 00:19:52.277 }, 00:19:52.277 "memory_domains": [ 00:19:52.277 { 00:19:52.277 "dma_device_id": "system", 00:19:52.277 "dma_device_type": 1 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.277 "dma_device_type": 2 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "dma_device_id": "system", 00:19:52.277 "dma_device_type": 1 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.277 "dma_device_type": 2 00:19:52.277 } 00:19:52.277 ], 00:19:52.277 "driver_specific": { 00:19:52.277 "raid": { 00:19:52.277 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:52.277 "strip_size_kb": 0, 00:19:52.277 "state": "online", 00:19:52.277 
"raid_level": "raid1", 00:19:52.277 "superblock": true, 00:19:52.277 "num_base_bdevs": 2, 00:19:52.277 "num_base_bdevs_discovered": 2, 00:19:52.277 "num_base_bdevs_operational": 2, 00:19:52.277 "base_bdevs_list": [ 00:19:52.277 { 00:19:52.277 "name": "pt1", 00:19:52.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:52.277 "is_configured": true, 00:19:52.277 "data_offset": 256, 00:19:52.277 "data_size": 7936 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "name": "pt2", 00:19:52.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.277 "is_configured": true, 00:19:52.277 "data_offset": 256, 00:19:52.277 "data_size": 7936 00:19:52.277 } 00:19:52.277 ] 00:19:52.277 } 00:19:52.277 } 00:19:52.277 }' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:52.277 pt2' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.277 20:12:53 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.277 [2024-12-05 20:12:53.621467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b2e07d42-0c60-42ad-9369-4891241617ca '!=' b2e07d42-0c60-42ad-9369-4891241617ca ']' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.277 [2024-12-05 20:12:53.669188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.277 
20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.277 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.537 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.537 "name": "raid_bdev1", 00:19:52.537 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:52.537 "strip_size_kb": 0, 00:19:52.537 "state": "online", 00:19:52.537 "raid_level": "raid1", 00:19:52.537 "superblock": true, 00:19:52.537 "num_base_bdevs": 2, 00:19:52.537 "num_base_bdevs_discovered": 1, 00:19:52.537 "num_base_bdevs_operational": 1, 00:19:52.537 "base_bdevs_list": [ 00:19:52.537 { 00:19:52.537 "name": null, 00:19:52.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.537 "is_configured": false, 00:19:52.537 "data_offset": 0, 00:19:52.537 "data_size": 7936 00:19:52.537 }, 00:19:52.537 { 00:19:52.537 "name": "pt2", 00:19:52.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.537 "is_configured": true, 00:19:52.537 "data_offset": 256, 00:19:52.537 "data_size": 7936 00:19:52.537 } 
00:19:52.537 ] 00:19:52.537 }' 00:19:52.537 20:12:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.537 20:12:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 [2024-12-05 20:12:54.172916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.797 [2024-12-05 20:12:54.172939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.797 [2024-12-05 20:12:54.172993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.797 [2024-12-05 20:12:54.173030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.797 [2024-12-05 20:12:54.173041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.797 20:12:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.797 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.058 [2024-12-05 20:12:54.244868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:53.058 [2024-12-05 
20:12:54.244981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.058 [2024-12-05 20:12:54.245014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:53.058 [2024-12-05 20:12:54.245055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.058 [2024-12-05 20:12:54.246946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.058 [2024-12-05 20:12:54.247031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:53.058 [2024-12-05 20:12:54.247095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:53.058 [2024-12-05 20:12:54.247166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.058 [2024-12-05 20:12:54.247286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:53.058 [2024-12-05 20:12:54.247332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:53.058 [2024-12-05 20:12:54.247424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:53.058 [2024-12-05 20:12:54.247560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:53.058 [2024-12-05 20:12:54.247594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:53.058 [2024-12-05 20:12:54.247718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.058 pt2 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.058 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.059 "name": "raid_bdev1", 00:19:53.059 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:53.059 "strip_size_kb": 0, 00:19:53.059 "state": "online", 00:19:53.059 "raid_level": "raid1", 00:19:53.059 "superblock": true, 00:19:53.059 "num_base_bdevs": 2, 00:19:53.059 
"num_base_bdevs_discovered": 1, 00:19:53.059 "num_base_bdevs_operational": 1, 00:19:53.059 "base_bdevs_list": [ 00:19:53.059 { 00:19:53.059 "name": null, 00:19:53.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.059 "is_configured": false, 00:19:53.059 "data_offset": 256, 00:19:53.059 "data_size": 7936 00:19:53.059 }, 00:19:53.059 { 00:19:53.059 "name": "pt2", 00:19:53.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.059 "is_configured": true, 00:19:53.059 "data_offset": 256, 00:19:53.059 "data_size": 7936 00:19:53.059 } 00:19:53.059 ] 00:19:53.059 }' 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.059 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.319 [2024-12-05 20:12:54.620432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.319 [2024-12-05 20:12:54.620505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.319 [2024-12-05 20:12:54.620563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.319 [2024-12-05 20:12:54.620611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.319 [2024-12-05 20:12:54.620656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.319 20:12:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:53.319 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.320 [2024-12-05 20:12:54.664386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:53.320 [2024-12-05 20:12:54.664430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.320 [2024-12-05 20:12:54.664445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:53.320 [2024-12-05 20:12:54.664453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.320 [2024-12-05 20:12:54.666285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.320 [2024-12-05 20:12:54.666320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:19:53.320 [2024-12-05 20:12:54.666361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:53.320 [2024-12-05 20:12:54.666403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:53.320 [2024-12-05 20:12:54.666507] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:53.320 [2024-12-05 20:12:54.666516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.320 [2024-12-05 20:12:54.666531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:53.320 [2024-12-05 20:12:54.666604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.320 [2024-12-05 20:12:54.666673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:53.320 [2024-12-05 20:12:54.666681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:53.320 [2024-12-05 20:12:54.666728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:53.320 [2024-12-05 20:12:54.666815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:53.320 [2024-12-05 20:12:54.666824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:53.320 [2024-12-05 20:12:54.666922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.320 pt1 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.320 "name": "raid_bdev1", 00:19:53.320 "uuid": "b2e07d42-0c60-42ad-9369-4891241617ca", 00:19:53.320 "strip_size_kb": 0, 00:19:53.320 "state": "online", 00:19:53.320 "raid_level": "raid1", 
00:19:53.320 "superblock": true, 00:19:53.320 "num_base_bdevs": 2, 00:19:53.320 "num_base_bdevs_discovered": 1, 00:19:53.320 "num_base_bdevs_operational": 1, 00:19:53.320 "base_bdevs_list": [ 00:19:53.320 { 00:19:53.320 "name": null, 00:19:53.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.320 "is_configured": false, 00:19:53.320 "data_offset": 256, 00:19:53.320 "data_size": 7936 00:19:53.320 }, 00:19:53.320 { 00:19:53.320 "name": "pt2", 00:19:53.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.320 "is_configured": true, 00:19:53.320 "data_offset": 256, 00:19:53.320 "data_size": 7936 00:19:53.320 } 00:19:53.320 ] 00:19:53.320 }' 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.320 20:12:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.890 
20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.890 [2024-12-05 20:12:55.107796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b2e07d42-0c60-42ad-9369-4891241617ca '!=' b2e07d42-0c60-42ad-9369-4891241617ca ']' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87513 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87513 ']' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87513 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87513 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87513' 00:19:53.890 killing process with pid 87513 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87513 00:19:53.890 [2024-12-05 20:12:55.191475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.890 [2024-12-05 20:12:55.191591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:53.890 [2024-12-05 20:12:55.191656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.890 20:12:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87513 00:19:53.890 [2024-12-05 20:12:55.191708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:54.149 [2024-12-05 20:12:55.401953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.155 20:12:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:55.155 ************************************ 00:19:55.155 END TEST raid_superblock_test_md_separate 00:19:55.155 ************************************ 00:19:55.155 00:19:55.155 real 0m5.928s 00:19:55.155 user 0m8.931s 00:19:55.155 sys 0m1.124s 00:19:55.155 20:12:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.155 20:12:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.155 20:12:56 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:55.155 20:12:56 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:55.155 20:12:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:55.156 20:12:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.156 20:12:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.156 ************************************ 00:19:55.156 START TEST raid_rebuild_test_sb_md_separate 00:19:55.156 ************************************ 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87843 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87843 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87843 ']' 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.156 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.416 [2024-12-05 20:12:56.647545] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:19:55.416 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:55.416 Zero copy mechanism will not be used. 00:19:55.416 [2024-12-05 20:12:56.647787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87843 ] 00:19:55.416 [2024-12-05 20:12:56.820552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.675 [2024-12-05 20:12:56.925596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.933 [2024-12-05 20:12:57.121560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.933 [2024-12-05 20:12:57.121645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.192 BaseBdev1_malloc 
00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.192 [2024-12-05 20:12:57.497772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:56.192 [2024-12-05 20:12:57.497837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.192 [2024-12-05 20:12:57.497860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:56.192 [2024-12-05 20:12:57.497871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.192 [2024-12-05 20:12:57.499756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.192 [2024-12-05 20:12:57.499796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:56.192 BaseBdev1 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.192 BaseBdev2_malloc 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.192 [2024-12-05 20:12:57.552932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:56.192 [2024-12-05 20:12:57.553046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.192 [2024-12-05 20:12:57.553070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:56.192 [2024-12-05 20:12:57.553082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.192 [2024-12-05 20:12:57.554852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.192 [2024-12-05 20:12:57.554904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:56.192 BaseBdev2 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.192 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 spare_malloc 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 spare_delay 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 [2024-12-05 20:12:57.653757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:56.452 [2024-12-05 20:12:57.653819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.452 [2024-12-05 20:12:57.653838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:56.452 [2024-12-05 20:12:57.653848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.452 [2024-12-05 20:12:57.655707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.452 [2024-12-05 20:12:57.655760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:56.452 spare 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.452 [2024-12-05 20:12:57.665787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.452 [2024-12-05 20:12:57.667513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.452 [2024-12-05 20:12:57.667766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:56.452 [2024-12-05 20:12:57.667786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:56.452 [2024-12-05 20:12:57.667865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:56.452 [2024-12-05 20:12:57.667995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:56.452 [2024-12-05 20:12:57.668006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:56.452 [2024-12-05 20:12:57.668116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.452 20:12:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.452 "name": "raid_bdev1", 00:19:56.452 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:19:56.452 "strip_size_kb": 0, 00:19:56.452 "state": "online", 00:19:56.452 "raid_level": "raid1", 00:19:56.452 "superblock": true, 00:19:56.452 "num_base_bdevs": 2, 00:19:56.452 "num_base_bdevs_discovered": 2, 00:19:56.452 "num_base_bdevs_operational": 2, 00:19:56.452 "base_bdevs_list": [ 00:19:56.452 { 00:19:56.452 "name": "BaseBdev1", 00:19:56.452 "uuid": "50ecfc45-7f23-5752-a9f1-145a2b2f4e3d", 00:19:56.452 "is_configured": true, 00:19:56.452 "data_offset": 256, 00:19:56.452 "data_size": 7936 00:19:56.452 }, 00:19:56.452 { 00:19:56.452 "name": "BaseBdev2", 00:19:56.452 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:19:56.452 "is_configured": true, 00:19:56.452 "data_offset": 256, 00:19:56.452 "data_size": 7936 
00:19:56.452 } 00:19:56.452 ] 00:19:56.452 }' 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.452 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.021 [2024-12-05 20:12:58.161219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.021 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:57.021 [2024-12-05 20:12:58.428679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.021 /dev/nbd0 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.281 1+0 records in 00:19:57.281 1+0 records out 00:19:57.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452169 s, 9.1 MB/s 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.281 20:12:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:57.281 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:57.847 7936+0 records in 00:19:57.847 7936+0 records out 00:19:57.847 32505856 bytes (33 MB, 31 MiB) copied, 0.653884 s, 49.7 MB/s 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.847 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.105 [2024-12-05 20:12:59.372969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.105 20:12:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.105 [2024-12-05 20:12:59.387786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.105 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.105 "name": "raid_bdev1", 00:19:58.105 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:19:58.105 "strip_size_kb": 0, 00:19:58.105 "state": "online", 00:19:58.105 "raid_level": "raid1", 00:19:58.105 "superblock": true, 00:19:58.105 "num_base_bdevs": 2, 00:19:58.105 "num_base_bdevs_discovered": 1, 00:19:58.105 "num_base_bdevs_operational": 1, 00:19:58.105 "base_bdevs_list": [ 00:19:58.106 { 00:19:58.106 "name": null, 00:19:58.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.106 "is_configured": false, 00:19:58.106 "data_offset": 0, 00:19:58.106 "data_size": 7936 00:19:58.106 }, 00:19:58.106 { 00:19:58.106 "name": "BaseBdev2", 00:19:58.106 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:19:58.106 "is_configured": true, 00:19:58.106 "data_offset": 256, 00:19:58.106 "data_size": 7936 00:19:58.106 } 00:19:58.106 ] 00:19:58.106 }' 00:19:58.106 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.106 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.672 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:58.672 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.672 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.672 [2024-12-05 20:12:59.834997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.672 [2024-12-05 20:12:59.849390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:58.672 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.672 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:58.672 [2024-12-05 20:12:59.851150] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.609 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.609 "name": "raid_bdev1", 00:19:59.609 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:19:59.609 "strip_size_kb": 0, 00:19:59.609 "state": "online", 00:19:59.609 "raid_level": "raid1", 00:19:59.609 "superblock": true, 00:19:59.609 "num_base_bdevs": 2, 00:19:59.609 "num_base_bdevs_discovered": 2, 00:19:59.609 "num_base_bdevs_operational": 2, 00:19:59.609 "process": { 00:19:59.609 "type": "rebuild", 00:19:59.609 "target": "spare", 00:19:59.609 "progress": { 00:19:59.609 "blocks": 2560, 00:19:59.609 "percent": 32 00:19:59.609 } 00:19:59.609 }, 00:19:59.609 "base_bdevs_list": [ 00:19:59.609 { 00:19:59.609 "name": "spare", 00:19:59.609 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:19:59.610 "is_configured": true, 00:19:59.610 "data_offset": 256, 00:19:59.610 "data_size": 7936 00:19:59.610 }, 00:19:59.610 { 00:19:59.610 "name": "BaseBdev2", 00:19:59.610 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:19:59.610 "is_configured": true, 00:19:59.610 "data_offset": 256, 00:19:59.610 "data_size": 7936 00:19:59.610 } 00:19:59.610 ] 00:19:59.610 }' 00:19:59.610 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.610 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.610 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.610 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.610 20:13:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:59.610 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.610 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.610 [2024-12-05 20:13:01.011074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.869 [2024-12-05 20:13:01.056067] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:59.869 [2024-12-05 20:13:01.056122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.869 [2024-12-05 20:13:01.056136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.869 [2024-12-05 20:13:01.056147] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:59.869 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.869 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.870 20:13:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.870 "name": "raid_bdev1", 00:19:59.870 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:19:59.870 "strip_size_kb": 0, 00:19:59.870 "state": "online", 00:19:59.870 "raid_level": "raid1", 00:19:59.870 "superblock": true, 00:19:59.870 "num_base_bdevs": 2, 00:19:59.870 "num_base_bdevs_discovered": 1, 00:19:59.870 "num_base_bdevs_operational": 1, 00:19:59.870 "base_bdevs_list": [ 00:19:59.870 { 00:19:59.870 "name": null, 00:19:59.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.870 "is_configured": false, 00:19:59.870 "data_offset": 0, 00:19:59.870 "data_size": 7936 00:19:59.870 }, 00:19:59.870 { 00:19:59.870 "name": "BaseBdev2", 00:19:59.870 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:19:59.870 "is_configured": true, 00:19:59.870 "data_offset": 256, 00:19:59.870 "data_size": 7936 00:19:59.870 } 00:19:59.870 ] 00:19:59.870 }' 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.870 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.129 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.129 "name": "raid_bdev1", 00:20:00.129 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:00.129 "strip_size_kb": 0, 00:20:00.129 "state": "online", 00:20:00.129 "raid_level": "raid1", 00:20:00.129 "superblock": true, 00:20:00.129 "num_base_bdevs": 2, 00:20:00.129 "num_base_bdevs_discovered": 1, 00:20:00.129 "num_base_bdevs_operational": 1, 00:20:00.129 "base_bdevs_list": [ 00:20:00.129 { 00:20:00.129 "name": null, 00:20:00.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.129 
"is_configured": false, 00:20:00.129 "data_offset": 0, 00:20:00.129 "data_size": 7936 00:20:00.129 }, 00:20:00.129 { 00:20:00.129 "name": "BaseBdev2", 00:20:00.129 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:00.129 "is_configured": true, 00:20:00.129 "data_offset": 256, 00:20:00.129 "data_size": 7936 00:20:00.129 } 00:20:00.129 ] 00:20:00.130 }' 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.389 [2024-12-05 20:13:01.670280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.389 [2024-12-05 20:13:01.683381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.389 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:00.389 [2024-12-05 20:13:01.685150] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:01.330 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.330 20:13:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.330 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.331 "name": "raid_bdev1", 00:20:01.331 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:01.331 "strip_size_kb": 0, 00:20:01.331 "state": "online", 00:20:01.331 "raid_level": "raid1", 00:20:01.331 "superblock": true, 00:20:01.331 "num_base_bdevs": 2, 00:20:01.331 "num_base_bdevs_discovered": 2, 00:20:01.331 "num_base_bdevs_operational": 2, 00:20:01.331 "process": { 00:20:01.331 "type": "rebuild", 00:20:01.331 "target": "spare", 00:20:01.331 "progress": { 00:20:01.331 "blocks": 2560, 00:20:01.331 "percent": 32 00:20:01.331 } 00:20:01.331 }, 00:20:01.331 "base_bdevs_list": [ 00:20:01.331 { 00:20:01.331 "name": "spare", 00:20:01.331 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:01.331 "is_configured": true, 00:20:01.331 "data_offset": 256, 00:20:01.331 "data_size": 7936 00:20:01.331 }, 
00:20:01.331 { 00:20:01.331 "name": "BaseBdev2", 00:20:01.331 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:01.331 "is_configured": true, 00:20:01.331 "data_offset": 256, 00:20:01.331 "data_size": 7936 00:20:01.331 } 00:20:01.331 ] 00:20:01.331 }' 00:20:01.331 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.590 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.590 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.590 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.590 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:01.590 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:01.590 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=704 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.591 20:13:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.591 "name": "raid_bdev1", 00:20:01.591 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:01.591 "strip_size_kb": 0, 00:20:01.591 "state": "online", 00:20:01.591 "raid_level": "raid1", 00:20:01.591 "superblock": true, 00:20:01.591 "num_base_bdevs": 2, 00:20:01.591 "num_base_bdevs_discovered": 2, 00:20:01.591 "num_base_bdevs_operational": 2, 00:20:01.591 "process": { 00:20:01.591 "type": "rebuild", 00:20:01.591 "target": "spare", 00:20:01.591 "progress": { 00:20:01.591 "blocks": 2816, 00:20:01.591 "percent": 35 00:20:01.591 } 00:20:01.591 }, 00:20:01.591 "base_bdevs_list": [ 00:20:01.591 { 00:20:01.591 "name": "spare", 00:20:01.591 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:01.591 "is_configured": true, 00:20:01.591 "data_offset": 256, 00:20:01.591 "data_size": 7936 00:20:01.591 }, 00:20:01.591 { 00:20:01.591 "name": "BaseBdev2", 00:20:01.591 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:01.591 
"is_configured": true, 00:20:01.591 "data_offset": 256, 00:20:01.591 "data_size": 7936 00:20:01.591 } 00:20:01.591 ] 00:20:01.591 }' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.591 20:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:02.530 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.789 20:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.789 20:13:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.789 "name": "raid_bdev1", 00:20:02.789 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:02.789 "strip_size_kb": 0, 00:20:02.789 "state": "online", 00:20:02.789 "raid_level": "raid1", 00:20:02.789 "superblock": true, 00:20:02.789 "num_base_bdevs": 2, 00:20:02.789 "num_base_bdevs_discovered": 2, 00:20:02.789 "num_base_bdevs_operational": 2, 00:20:02.789 "process": { 00:20:02.789 "type": "rebuild", 00:20:02.789 "target": "spare", 00:20:02.789 "progress": { 00:20:02.789 "blocks": 5632, 00:20:02.789 "percent": 70 00:20:02.789 } 00:20:02.789 }, 00:20:02.789 "base_bdevs_list": [ 00:20:02.789 { 00:20:02.789 "name": "spare", 00:20:02.789 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:02.789 "is_configured": true, 00:20:02.789 "data_offset": 256, 00:20:02.789 "data_size": 7936 00:20:02.789 }, 00:20:02.789 { 00:20:02.789 "name": "BaseBdev2", 00:20:02.789 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:02.789 "is_configured": true, 00:20:02.789 "data_offset": 256, 00:20:02.789 "data_size": 7936 00:20:02.789 } 00:20:02.789 ] 00:20:02.789 }' 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.789 20:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:03.729 [2024-12-05 20:13:04.797370] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:03.729 [2024-12-05 20:13:04.797439] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:03.729 [2024-12-05 20:13:04.797530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.729 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.987 "name": "raid_bdev1", 00:20:03.987 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:03.987 "strip_size_kb": 0, 00:20:03.987 "state": "online", 00:20:03.987 "raid_level": "raid1", 00:20:03.987 "superblock": true, 00:20:03.987 
"num_base_bdevs": 2, 00:20:03.987 "num_base_bdevs_discovered": 2, 00:20:03.987 "num_base_bdevs_operational": 2, 00:20:03.987 "base_bdevs_list": [ 00:20:03.987 { 00:20:03.987 "name": "spare", 00:20:03.987 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:03.987 "is_configured": true, 00:20:03.987 "data_offset": 256, 00:20:03.987 "data_size": 7936 00:20:03.987 }, 00:20:03.987 { 00:20:03.987 "name": "BaseBdev2", 00:20:03.987 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:03.987 "is_configured": true, 00:20:03.987 "data_offset": 256, 00:20:03.987 "data_size": 7936 00:20:03.987 } 00:20:03.987 ] 00:20:03.987 }' 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.987 20:13:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.987 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.987 "name": "raid_bdev1", 00:20:03.987 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:03.987 "strip_size_kb": 0, 00:20:03.987 "state": "online", 00:20:03.987 "raid_level": "raid1", 00:20:03.987 "superblock": true, 00:20:03.987 "num_base_bdevs": 2, 00:20:03.987 "num_base_bdevs_discovered": 2, 00:20:03.987 "num_base_bdevs_operational": 2, 00:20:03.987 "base_bdevs_list": [ 00:20:03.987 { 00:20:03.987 "name": "spare", 00:20:03.987 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:03.987 "is_configured": true, 00:20:03.987 "data_offset": 256, 00:20:03.987 "data_size": 7936 00:20:03.987 }, 00:20:03.987 { 00:20:03.987 "name": "BaseBdev2", 00:20:03.987 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:03.987 "is_configured": true, 00:20:03.987 "data_offset": 256, 00:20:03.987 "data_size": 7936 00:20:03.988 } 00:20:03.988 ] 00:20:03.988 }' 00:20:03.988 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.988 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:03.988 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.245 "name": "raid_bdev1", 00:20:04.245 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:04.245 
"strip_size_kb": 0, 00:20:04.245 "state": "online", 00:20:04.245 "raid_level": "raid1", 00:20:04.245 "superblock": true, 00:20:04.245 "num_base_bdevs": 2, 00:20:04.245 "num_base_bdevs_discovered": 2, 00:20:04.245 "num_base_bdevs_operational": 2, 00:20:04.245 "base_bdevs_list": [ 00:20:04.245 { 00:20:04.245 "name": "spare", 00:20:04.245 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:04.245 "is_configured": true, 00:20:04.245 "data_offset": 256, 00:20:04.245 "data_size": 7936 00:20:04.245 }, 00:20:04.245 { 00:20:04.245 "name": "BaseBdev2", 00:20:04.245 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:04.245 "is_configured": true, 00:20:04.245 "data_offset": 256, 00:20:04.245 "data_size": 7936 00:20:04.245 } 00:20:04.245 ] 00:20:04.245 }' 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.245 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.503 [2024-12-05 20:13:05.910913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.503 [2024-12-05 20:13:05.910941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.503 [2024-12-05 20:13:05.911013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.503 [2024-12-05 20:13:05.911074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.503 [2024-12-05 20:13:05.911082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.503 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:04.761 20:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:04.761 /dev/nbd0 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.020 1+0 records in 00:20:05.020 1+0 records out 00:20:05.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414386 s, 9.9 MB/s 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.020 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:05.020 /dev/nbd1 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.279 1+0 records in 00:20:05.279 1+0 records out 00:20:05.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440245 s, 9.3 MB/s 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.279 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.538 20:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.797 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.798 [2024-12-05 20:13:07.104591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:05.798 [2024-12-05 20:13:07.104728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.798 [2024-12-05 20:13:07.104761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:05.798 [2024-12-05 20:13:07.104770] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:05.798 [2024-12-05 20:13:07.106716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.798 [2024-12-05 20:13:07.106756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:05.798 [2024-12-05 20:13:07.106810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:05.798 [2024-12-05 20:13:07.106866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.798 [2024-12-05 20:13:07.107031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.798 spare 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.798 [2024-12-05 20:13:07.206914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:05.798 [2024-12-05 20:13:07.206944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:05.798 [2024-12-05 20:13:07.207033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:05.798 [2024-12-05 20:13:07.207159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:05.798 [2024-12-05 20:13:07.207167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:05.798 [2024-12-05 20:13:07.207269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.798 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.057 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.057 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.057 "name": "raid_bdev1", 00:20:06.057 "uuid": 
"1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:06.057 "strip_size_kb": 0, 00:20:06.057 "state": "online", 00:20:06.057 "raid_level": "raid1", 00:20:06.057 "superblock": true, 00:20:06.057 "num_base_bdevs": 2, 00:20:06.057 "num_base_bdevs_discovered": 2, 00:20:06.057 "num_base_bdevs_operational": 2, 00:20:06.057 "base_bdevs_list": [ 00:20:06.057 { 00:20:06.057 "name": "spare", 00:20:06.057 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:06.057 "is_configured": true, 00:20:06.057 "data_offset": 256, 00:20:06.057 "data_size": 7936 00:20:06.057 }, 00:20:06.057 { 00:20:06.057 "name": "BaseBdev2", 00:20:06.057 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:06.057 "is_configured": true, 00:20:06.057 "data_offset": 256, 00:20:06.057 "data_size": 7936 00:20:06.057 } 00:20:06.057 ] 00:20:06.057 }' 00:20:06.057 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.057 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.316 "name": "raid_bdev1", 00:20:06.316 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:06.316 "strip_size_kb": 0, 00:20:06.316 "state": "online", 00:20:06.316 "raid_level": "raid1", 00:20:06.316 "superblock": true, 00:20:06.316 "num_base_bdevs": 2, 00:20:06.316 "num_base_bdevs_discovered": 2, 00:20:06.316 "num_base_bdevs_operational": 2, 00:20:06.316 "base_bdevs_list": [ 00:20:06.316 { 00:20:06.316 "name": "spare", 00:20:06.316 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:06.316 "is_configured": true, 00:20:06.316 "data_offset": 256, 00:20:06.316 "data_size": 7936 00:20:06.316 }, 00:20:06.316 { 00:20:06.316 "name": "BaseBdev2", 00:20:06.316 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:06.316 "is_configured": true, 00:20:06.316 "data_offset": 256, 00:20:06.316 "data_size": 7936 00:20:06.316 } 00:20:06.316 ] 00:20:06.316 }' 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.316 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.576 [2024-12-05 20:13:07.827380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.576 20:13:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.576 "name": "raid_bdev1", 00:20:06.576 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:06.576 "strip_size_kb": 0, 00:20:06.576 "state": "online", 00:20:06.576 "raid_level": "raid1", 00:20:06.576 "superblock": true, 00:20:06.576 "num_base_bdevs": 2, 00:20:06.576 "num_base_bdevs_discovered": 1, 00:20:06.576 "num_base_bdevs_operational": 1, 00:20:06.576 "base_bdevs_list": [ 00:20:06.576 { 00:20:06.576 "name": null, 00:20:06.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.576 "is_configured": false, 00:20:06.576 "data_offset": 0, 00:20:06.576 "data_size": 7936 00:20:06.576 }, 00:20:06.576 { 00:20:06.576 "name": "BaseBdev2", 00:20:06.576 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:06.576 "is_configured": true, 00:20:06.576 "data_offset": 256, 00:20:06.576 "data_size": 7936 00:20:06.576 } 00:20:06.576 ] 00:20:06.576 }' 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.576 20:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.145 20:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.145 20:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.145 20:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.145 [2024-12-05 20:13:08.282597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.145 [2024-12-05 20:13:08.282854] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:07.145 [2024-12-05 20:13:08.282936] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:07.145 [2024-12-05 20:13:08.283018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.145 [2024-12-05 20:13:08.296425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:07.145 20:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.145 20:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:07.145 [2024-12-05 20:13:08.298243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.083 20:13:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.083 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.083 "name": "raid_bdev1", 00:20:08.083 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:08.083 "strip_size_kb": 0, 00:20:08.083 "state": "online", 00:20:08.083 "raid_level": "raid1", 00:20:08.083 "superblock": true, 00:20:08.083 "num_base_bdevs": 2, 00:20:08.083 "num_base_bdevs_discovered": 2, 00:20:08.083 "num_base_bdevs_operational": 2, 00:20:08.083 "process": { 00:20:08.083 "type": "rebuild", 00:20:08.083 "target": "spare", 00:20:08.083 "progress": { 00:20:08.083 "blocks": 2560, 00:20:08.083 "percent": 32 00:20:08.083 } 00:20:08.083 }, 00:20:08.083 "base_bdevs_list": [ 00:20:08.083 { 00:20:08.083 "name": "spare", 00:20:08.083 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:08.083 "is_configured": true, 00:20:08.083 "data_offset": 256, 00:20:08.083 "data_size": 7936 00:20:08.083 }, 00:20:08.083 { 00:20:08.084 "name": "BaseBdev2", 00:20:08.084 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:08.084 "is_configured": true, 00:20:08.084 "data_offset": 256, 00:20:08.084 "data_size": 7936 00:20:08.084 } 00:20:08.084 ] 00:20:08.084 
}' 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 [2024-12-05 20:13:09.462665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.084 [2024-12-05 20:13:09.503055] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:08.084 [2024-12-05 20:13:09.503106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.084 [2024-12-05 20:13:09.503120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.084 [2024-12-05 20:13:09.503139] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.345 "name": "raid_bdev1", 00:20:08.345 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:08.345 "strip_size_kb": 0, 00:20:08.345 "state": "online", 00:20:08.345 "raid_level": "raid1", 00:20:08.345 "superblock": true, 00:20:08.345 "num_base_bdevs": 2, 00:20:08.345 "num_base_bdevs_discovered": 1, 00:20:08.345 "num_base_bdevs_operational": 1, 00:20:08.345 "base_bdevs_list": [ 00:20:08.345 { 00:20:08.345 "name": 
null, 00:20:08.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.345 "is_configured": false, 00:20:08.345 "data_offset": 0, 00:20:08.345 "data_size": 7936 00:20:08.345 }, 00:20:08.345 { 00:20:08.345 "name": "BaseBdev2", 00:20:08.345 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:08.345 "is_configured": true, 00:20:08.345 "data_offset": 256, 00:20:08.345 "data_size": 7936 00:20:08.345 } 00:20:08.345 ] 00:20:08.345 }' 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.345 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.618 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:08.618 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.618 20:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.618 [2024-12-05 20:13:10.006000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:08.618 [2024-12-05 20:13:10.006113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.618 [2024-12-05 20:13:10.006190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:08.618 [2024-12-05 20:13:10.006230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.618 [2024-12-05 20:13:10.006498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.618 [2024-12-05 20:13:10.006555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:08.618 [2024-12-05 20:13:10.006635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:08.618 [2024-12-05 20:13:10.006671] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:08.618 [2024-12-05 20:13:10.006711] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:08.618 [2024-12-05 20:13:10.006775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.618 [2024-12-05 20:13:10.019957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:08.618 spare 00:20:08.618 20:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.618 [2024-12-05 20:13:10.021759] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.618 20:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.998 20:13:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.998 "name": "raid_bdev1", 00:20:09.998 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:09.998 "strip_size_kb": 0, 00:20:09.998 "state": "online", 00:20:09.998 "raid_level": "raid1", 00:20:09.998 "superblock": true, 00:20:09.998 "num_base_bdevs": 2, 00:20:09.998 "num_base_bdevs_discovered": 2, 00:20:09.998 "num_base_bdevs_operational": 2, 00:20:09.998 "process": { 00:20:09.998 "type": "rebuild", 00:20:09.998 "target": "spare", 00:20:09.998 "progress": { 00:20:09.998 "blocks": 2560, 00:20:09.998 "percent": 32 00:20:09.998 } 00:20:09.998 }, 00:20:09.998 "base_bdevs_list": [ 00:20:09.998 { 00:20:09.998 "name": "spare", 00:20:09.998 "uuid": "ddb59f90-454f-56b8-b090-85a8436823e5", 00:20:09.998 "is_configured": true, 00:20:09.998 "data_offset": 256, 00:20:09.998 "data_size": 7936 00:20:09.998 }, 00:20:09.998 { 00:20:09.998 "name": "BaseBdev2", 00:20:09.998 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:09.998 "is_configured": true, 00:20:09.998 "data_offset": 256, 00:20:09.998 "data_size": 7936 00:20:09.998 } 00:20:09.998 ] 00:20:09.998 }' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.998 [2024-12-05 20:13:11.165796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.998 [2024-12-05 20:13:11.226707] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:09.998 [2024-12-05 20:13:11.226764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.998 [2024-12-05 20:13:11.226779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.998 [2024-12-05 20:13:11.226786] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.998 "name": "raid_bdev1", 00:20:09.998 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:09.998 "strip_size_kb": 0, 00:20:09.998 "state": "online", 00:20:09.998 "raid_level": "raid1", 00:20:09.998 "superblock": true, 00:20:09.998 "num_base_bdevs": 2, 00:20:09.998 "num_base_bdevs_discovered": 1, 00:20:09.998 "num_base_bdevs_operational": 1, 00:20:09.998 "base_bdevs_list": [ 00:20:09.998 { 00:20:09.998 "name": null, 00:20:09.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.998 "is_configured": false, 00:20:09.998 "data_offset": 0, 00:20:09.998 "data_size": 7936 00:20:09.998 }, 00:20:09.998 { 00:20:09.998 "name": "BaseBdev2", 00:20:09.998 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:09.998 "is_configured": true, 00:20:09.998 "data_offset": 256, 00:20:09.998 "data_size": 7936 00:20:09.998 } 00:20:09.998 ] 00:20:09.998 }' 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.998 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.568 "name": "raid_bdev1", 00:20:10.568 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:10.568 "strip_size_kb": 0, 00:20:10.568 "state": "online", 00:20:10.568 "raid_level": "raid1", 00:20:10.568 "superblock": true, 00:20:10.568 "num_base_bdevs": 2, 00:20:10.568 "num_base_bdevs_discovered": 1, 00:20:10.568 "num_base_bdevs_operational": 1, 00:20:10.568 "base_bdevs_list": [ 00:20:10.568 { 00:20:10.568 "name": null, 00:20:10.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.568 "is_configured": false, 00:20:10.568 "data_offset": 0, 00:20:10.568 "data_size": 7936 00:20:10.568 }, 00:20:10.568 { 00:20:10.568 "name": "BaseBdev2", 00:20:10.568 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 
00:20:10.568 "is_configured": true, 00:20:10.568 "data_offset": 256, 00:20:10.568 "data_size": 7936 00:20:10.568 } 00:20:10.568 ] 00:20:10.568 }' 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.568 [2024-12-05 20:13:11.909024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:10.568 [2024-12-05 20:13:11.909072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.568 [2024-12-05 20:13:11.909092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:10.568 [2024-12-05 20:13:11.909100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:10.568 [2024-12-05 20:13:11.909330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.568 [2024-12-05 20:13:11.909345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:10.568 [2024-12-05 20:13:11.909391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:10.568 [2024-12-05 20:13:11.909402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:10.568 [2024-12-05 20:13:11.909414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:10.568 [2024-12-05 20:13:11.909423] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:10.568 BaseBdev1 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.568 20:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.508 20:13:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.508 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.767 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.768 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.768 "name": "raid_bdev1", 00:20:11.768 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:11.768 "strip_size_kb": 0, 00:20:11.768 "state": "online", 00:20:11.768 "raid_level": "raid1", 00:20:11.768 "superblock": true, 00:20:11.768 "num_base_bdevs": 2, 00:20:11.768 "num_base_bdevs_discovered": 1, 00:20:11.768 "num_base_bdevs_operational": 1, 00:20:11.768 "base_bdevs_list": [ 00:20:11.768 { 00:20:11.768 "name": null, 00:20:11.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.768 "is_configured": false, 00:20:11.768 "data_offset": 0, 00:20:11.768 "data_size": 7936 00:20:11.768 }, 00:20:11.768 { 00:20:11.768 "name": "BaseBdev2", 00:20:11.768 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:11.768 "is_configured": true, 00:20:11.768 "data_offset": 256, 00:20:11.768 "data_size": 7936 00:20:11.768 } 00:20:11.768 ] 00:20:11.768 }' 00:20:11.768 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.768 20:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.027 "name": "raid_bdev1", 00:20:12.027 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:12.027 "strip_size_kb": 0, 00:20:12.027 "state": "online", 00:20:12.027 "raid_level": "raid1", 00:20:12.027 "superblock": true, 00:20:12.027 "num_base_bdevs": 2, 00:20:12.027 "num_base_bdevs_discovered": 1, 00:20:12.027 "num_base_bdevs_operational": 1, 00:20:12.027 "base_bdevs_list": [ 00:20:12.027 { 00:20:12.027 "name": null, 00:20:12.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.027 
"is_configured": false, 00:20:12.027 "data_offset": 0, 00:20:12.027 "data_size": 7936 00:20:12.027 }, 00:20:12.027 { 00:20:12.027 "name": "BaseBdev2", 00:20:12.027 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:12.027 "is_configured": true, 00:20:12.027 "data_offset": 256, 00:20:12.027 "data_size": 7936 00:20:12.027 } 00:20:12.027 ] 00:20:12.027 }' 00:20:12.027 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:12.287 20:13:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.287 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.287 [2024-12-05 20:13:13.534773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.287 [2024-12-05 20:13:13.535024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:12.287 [2024-12-05 20:13:13.535045] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:12.287 request: 00:20:12.287 { 00:20:12.288 "base_bdev": "BaseBdev1", 00:20:12.288 "raid_bdev": "raid_bdev1", 00:20:12.288 "method": "bdev_raid_add_base_bdev", 00:20:12.288 "req_id": 1 00:20:12.288 } 00:20:12.288 Got JSON-RPC error response 00:20:12.288 response: 00:20:12.288 { 00:20:12.288 "code": -22, 00:20:12.288 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:12.288 } 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.288 20:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.227 "name": "raid_bdev1", 00:20:13.227 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:13.227 "strip_size_kb": 0, 00:20:13.227 "state": "online", 00:20:13.227 "raid_level": "raid1", 00:20:13.227 "superblock": true, 00:20:13.227 "num_base_bdevs": 2, 00:20:13.227 
"num_base_bdevs_discovered": 1, 00:20:13.227 "num_base_bdevs_operational": 1, 00:20:13.227 "base_bdevs_list": [ 00:20:13.227 { 00:20:13.227 "name": null, 00:20:13.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.227 "is_configured": false, 00:20:13.227 "data_offset": 0, 00:20:13.227 "data_size": 7936 00:20:13.227 }, 00:20:13.227 { 00:20:13.227 "name": "BaseBdev2", 00:20:13.227 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:13.227 "is_configured": true, 00:20:13.227 "data_offset": 256, 00:20:13.227 "data_size": 7936 00:20:13.227 } 00:20:13.227 ] 00:20:13.227 }' 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.227 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.795 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.795 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.795 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.795 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.795 20:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.795 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.795 "name": "raid_bdev1", 00:20:13.795 "uuid": "1035dbc2-7748-4b1d-8a44-3820a7320d77", 00:20:13.795 "strip_size_kb": 0, 00:20:13.795 "state": "online", 00:20:13.795 "raid_level": "raid1", 00:20:13.796 "superblock": true, 00:20:13.796 "num_base_bdevs": 2, 00:20:13.796 "num_base_bdevs_discovered": 1, 00:20:13.796 "num_base_bdevs_operational": 1, 00:20:13.796 "base_bdevs_list": [ 00:20:13.796 { 00:20:13.796 "name": null, 00:20:13.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.796 "is_configured": false, 00:20:13.796 "data_offset": 0, 00:20:13.796 "data_size": 7936 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev2", 00:20:13.796 "uuid": "c284fb73-cb46-5b8f-94c7-cb71a0c585aa", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 256, 00:20:13.796 "data_size": 7936 00:20:13.796 } 00:20:13.796 ] 00:20:13.796 }' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87843 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87843 ']' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87843 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:13.796 20:13:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87843 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.796 killing process with pid 87843 00:20:13.796 Received shutdown signal, test time was about 60.000000 seconds 00:20:13.796 00:20:13.796 Latency(us) 00:20:13.796 [2024-12-05T20:13:15.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.796 [2024-12-05T20:13:15.233Z] =================================================================================================================== 00:20:13.796 [2024-12-05T20:13:15.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87843' 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87843 00:20:13.796 [2024-12-05 20:13:15.158926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:13.796 [2024-12-05 20:13:15.159042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.796 [2024-12-05 20:13:15.159088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.796 [2024-12-05 20:13:15.159100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:13.796 20:13:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87843 00:20:14.055 [2024-12-05 20:13:15.457076] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:20:15.434 ************************************ 00:20:15.434 END TEST raid_rebuild_test_sb_md_separate 00:20:15.434 ************************************ 00:20:15.434 20:13:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:15.434 00:20:15.434 real 0m19.962s 00:20:15.434 user 0m26.137s 00:20:15.434 sys 0m2.725s 00:20:15.434 20:13:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.434 20:13:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.434 20:13:16 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:15.434 20:13:16 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:15.434 20:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:15.434 20:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.434 20:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.434 ************************************ 00:20:15.434 START TEST raid_state_function_test_sb_md_interleaved 00:20:15.434 ************************************ 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:15.434 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:15.435 20:13:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88529 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88529' 00:20:15.435 Process raid pid: 88529 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88529 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88529 ']' 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.435 20:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.435 [2024-12-05 20:13:16.690743] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:20:15.435 [2024-12-05 20:13:16.690861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.694 [2024-12-05 20:13:16.870265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.694 [2024-12-05 20:13:16.981771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.953 [2024-12-05 20:13:17.179560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.953 [2024-12-05 20:13:17.179595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.213 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.213 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:16.213 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:16.213 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.213 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.213 [2024-12-05 20:13:17.499993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:16.213 [2024-12-05 20:13:17.500050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:16.213 [2024-12-05 20:13:17.500060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.213 [2024-12-05 20:13:17.500079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.213 20:13:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.214 20:13:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.214 "name": "Existed_Raid", 00:20:16.214 "uuid": "e5d76bcb-b218-4ea6-8acd-83572cf08420", 00:20:16.214 "strip_size_kb": 0, 00:20:16.214 "state": "configuring", 00:20:16.214 "raid_level": "raid1", 00:20:16.214 "superblock": true, 00:20:16.214 "num_base_bdevs": 2, 00:20:16.214 "num_base_bdevs_discovered": 0, 00:20:16.214 "num_base_bdevs_operational": 2, 00:20:16.214 "base_bdevs_list": [ 00:20:16.214 { 00:20:16.214 "name": "BaseBdev1", 00:20:16.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.214 "is_configured": false, 00:20:16.214 "data_offset": 0, 00:20:16.214 "data_size": 0 00:20:16.214 }, 00:20:16.214 { 00:20:16.214 "name": "BaseBdev2", 00:20:16.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.214 "is_configured": false, 00:20:16.214 "data_offset": 0, 00:20:16.214 "data_size": 0 00:20:16.214 } 00:20:16.214 ] 00:20:16.214 }' 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.214 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 [2024-12-05 20:13:17.947107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.783 [2024-12-05 20:13:17.947209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 [2024-12-05 20:13:17.959092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:16.783 [2024-12-05 20:13:17.959176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:16.783 [2024-12-05 20:13:17.959224] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.783 [2024-12-05 20:13:17.959250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.783 20:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 [2024-12-05 20:13:18.006538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.783 BaseBdev1 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.783 [ 00:20:16.783 { 00:20:16.783 "name": "BaseBdev1", 00:20:16.783 "aliases": [ 00:20:16.783 "d34e1faa-dd75-48e3-a461-3f3d38906589" 00:20:16.783 ], 00:20:16.783 "product_name": "Malloc disk", 00:20:16.783 "block_size": 4128, 00:20:16.783 "num_blocks": 8192, 00:20:16.783 "uuid": "d34e1faa-dd75-48e3-a461-3f3d38906589", 00:20:16.783 "md_size": 32, 00:20:16.783 
"md_interleave": true, 00:20:16.783 "dif_type": 0, 00:20:16.783 "assigned_rate_limits": { 00:20:16.783 "rw_ios_per_sec": 0, 00:20:16.783 "rw_mbytes_per_sec": 0, 00:20:16.783 "r_mbytes_per_sec": 0, 00:20:16.783 "w_mbytes_per_sec": 0 00:20:16.783 }, 00:20:16.783 "claimed": true, 00:20:16.783 "claim_type": "exclusive_write", 00:20:16.783 "zoned": false, 00:20:16.783 "supported_io_types": { 00:20:16.783 "read": true, 00:20:16.783 "write": true, 00:20:16.783 "unmap": true, 00:20:16.783 "flush": true, 00:20:16.783 "reset": true, 00:20:16.783 "nvme_admin": false, 00:20:16.783 "nvme_io": false, 00:20:16.783 "nvme_io_md": false, 00:20:16.783 "write_zeroes": true, 00:20:16.783 "zcopy": true, 00:20:16.783 "get_zone_info": false, 00:20:16.783 "zone_management": false, 00:20:16.783 "zone_append": false, 00:20:16.783 "compare": false, 00:20:16.783 "compare_and_write": false, 00:20:16.783 "abort": true, 00:20:16.783 "seek_hole": false, 00:20:16.783 "seek_data": false, 00:20:16.783 "copy": true, 00:20:16.783 "nvme_iov_md": false 00:20:16.783 }, 00:20:16.783 "memory_domains": [ 00:20:16.783 { 00:20:16.783 "dma_device_id": "system", 00:20:16.783 "dma_device_type": 1 00:20:16.783 }, 00:20:16.783 { 00:20:16.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.783 "dma_device_type": 2 00:20:16.783 } 00:20:16.783 ], 00:20:16.783 "driver_specific": {} 00:20:16.783 } 00:20:16.783 ] 00:20:16.783 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.784 20:13:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.784 "name": "Existed_Raid", 00:20:16.784 "uuid": "a214ff88-bfc5-45fa-ac40-10f80b71461b", 00:20:16.784 "strip_size_kb": 0, 00:20:16.784 "state": "configuring", 00:20:16.784 "raid_level": "raid1", 
00:20:16.784 "superblock": true, 00:20:16.784 "num_base_bdevs": 2, 00:20:16.784 "num_base_bdevs_discovered": 1, 00:20:16.784 "num_base_bdevs_operational": 2, 00:20:16.784 "base_bdevs_list": [ 00:20:16.784 { 00:20:16.784 "name": "BaseBdev1", 00:20:16.784 "uuid": "d34e1faa-dd75-48e3-a461-3f3d38906589", 00:20:16.784 "is_configured": true, 00:20:16.784 "data_offset": 256, 00:20:16.784 "data_size": 7936 00:20:16.784 }, 00:20:16.784 { 00:20:16.784 "name": "BaseBdev2", 00:20:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.784 "is_configured": false, 00:20:16.784 "data_offset": 0, 00:20:16.784 "data_size": 0 00:20:16.784 } 00:20:16.784 ] 00:20:16.784 }' 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.784 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.354 [2024-12-05 20:13:18.509720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:17.354 [2024-12-05 20:13:18.509817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.354 [2024-12-05 20:13:18.521762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.354 [2024-12-05 20:13:18.523538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.354 [2024-12-05 20:13:18.523583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.354 
20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.354 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.354 "name": "Existed_Raid", 00:20:17.354 "uuid": "808a2557-ca32-414d-9374-471931b0ad1f", 00:20:17.354 "strip_size_kb": 0, 00:20:17.354 "state": "configuring", 00:20:17.354 "raid_level": "raid1", 00:20:17.354 "superblock": true, 00:20:17.355 "num_base_bdevs": 2, 00:20:17.355 "num_base_bdevs_discovered": 1, 00:20:17.355 "num_base_bdevs_operational": 2, 00:20:17.355 "base_bdevs_list": [ 00:20:17.355 { 00:20:17.355 "name": "BaseBdev1", 00:20:17.355 "uuid": "d34e1faa-dd75-48e3-a461-3f3d38906589", 00:20:17.355 "is_configured": true, 00:20:17.355 "data_offset": 256, 00:20:17.355 "data_size": 7936 00:20:17.355 }, 00:20:17.355 { 00:20:17.355 "name": "BaseBdev2", 00:20:17.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.355 "is_configured": false, 00:20:17.355 "data_offset": 0, 00:20:17.355 "data_size": 0 00:20:17.355 } 00:20:17.355 ] 00:20:17.355 }' 00:20:17.355 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:17.355 20:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.615 [2024-12-05 20:13:19.042673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.615 [2024-12-05 20:13:19.042989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:17.615 [2024-12-05 20:13:19.043029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:17.615 [2024-12-05 20:13:19.043141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:17.615 [2024-12-05 20:13:19.043247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:17.615 [2024-12-05 20:13:19.043284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:17.615 [2024-12-05 20:13:19.043376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.615 BaseBdev2 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.615 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.875 [ 00:20:17.875 { 00:20:17.875 "name": "BaseBdev2", 00:20:17.875 "aliases": [ 00:20:17.875 "0cd1a28d-0063-40aa-9643-d06b7b71fc72" 00:20:17.875 ], 00:20:17.875 "product_name": "Malloc disk", 00:20:17.875 "block_size": 4128, 00:20:17.875 "num_blocks": 8192, 00:20:17.875 "uuid": "0cd1a28d-0063-40aa-9643-d06b7b71fc72", 00:20:17.875 "md_size": 32, 00:20:17.875 "md_interleave": true, 00:20:17.875 "dif_type": 0, 00:20:17.875 "assigned_rate_limits": { 00:20:17.875 "rw_ios_per_sec": 0, 00:20:17.875 "rw_mbytes_per_sec": 0, 00:20:17.875 "r_mbytes_per_sec": 0, 00:20:17.875 "w_mbytes_per_sec": 0 00:20:17.875 }, 00:20:17.875 "claimed": true, 00:20:17.875 "claim_type": "exclusive_write", 
00:20:17.875 "zoned": false, 00:20:17.875 "supported_io_types": { 00:20:17.875 "read": true, 00:20:17.875 "write": true, 00:20:17.875 "unmap": true, 00:20:17.875 "flush": true, 00:20:17.875 "reset": true, 00:20:17.875 "nvme_admin": false, 00:20:17.875 "nvme_io": false, 00:20:17.875 "nvme_io_md": false, 00:20:17.875 "write_zeroes": true, 00:20:17.875 "zcopy": true, 00:20:17.875 "get_zone_info": false, 00:20:17.875 "zone_management": false, 00:20:17.875 "zone_append": false, 00:20:17.875 "compare": false, 00:20:17.875 "compare_and_write": false, 00:20:17.875 "abort": true, 00:20:17.875 "seek_hole": false, 00:20:17.875 "seek_data": false, 00:20:17.875 "copy": true, 00:20:17.875 "nvme_iov_md": false 00:20:17.875 }, 00:20:17.875 "memory_domains": [ 00:20:17.875 { 00:20:17.875 "dma_device_id": "system", 00:20:17.875 "dma_device_type": 1 00:20:17.875 }, 00:20:17.875 { 00:20:17.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.875 "dma_device_type": 2 00:20:17.875 } 00:20:17.875 ], 00:20:17.875 "driver_specific": {} 00:20:17.875 } 00:20:17.875 ] 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.875 
20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.875 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.875 "name": "Existed_Raid", 00:20:17.875 "uuid": "808a2557-ca32-414d-9374-471931b0ad1f", 00:20:17.875 "strip_size_kb": 0, 00:20:17.875 "state": "online", 00:20:17.875 "raid_level": "raid1", 00:20:17.875 "superblock": true, 00:20:17.875 "num_base_bdevs": 2, 00:20:17.875 "num_base_bdevs_discovered": 2, 00:20:17.875 
"num_base_bdevs_operational": 2, 00:20:17.875 "base_bdevs_list": [ 00:20:17.875 { 00:20:17.876 "name": "BaseBdev1", 00:20:17.876 "uuid": "d34e1faa-dd75-48e3-a461-3f3d38906589", 00:20:17.876 "is_configured": true, 00:20:17.876 "data_offset": 256, 00:20:17.876 "data_size": 7936 00:20:17.876 }, 00:20:17.876 { 00:20:17.876 "name": "BaseBdev2", 00:20:17.876 "uuid": "0cd1a28d-0063-40aa-9643-d06b7b71fc72", 00:20:17.876 "is_configured": true, 00:20:17.876 "data_offset": 256, 00:20:17.876 "data_size": 7936 00:20:17.876 } 00:20:17.876 ] 00:20:17.876 }' 00:20:17.876 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.876 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.135 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.135 20:13:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.135 [2024-12-05 20:13:19.558248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.395 "name": "Existed_Raid", 00:20:18.395 "aliases": [ 00:20:18.395 "808a2557-ca32-414d-9374-471931b0ad1f" 00:20:18.395 ], 00:20:18.395 "product_name": "Raid Volume", 00:20:18.395 "block_size": 4128, 00:20:18.395 "num_blocks": 7936, 00:20:18.395 "uuid": "808a2557-ca32-414d-9374-471931b0ad1f", 00:20:18.395 "md_size": 32, 00:20:18.395 "md_interleave": true, 00:20:18.395 "dif_type": 0, 00:20:18.395 "assigned_rate_limits": { 00:20:18.395 "rw_ios_per_sec": 0, 00:20:18.395 "rw_mbytes_per_sec": 0, 00:20:18.395 "r_mbytes_per_sec": 0, 00:20:18.395 "w_mbytes_per_sec": 0 00:20:18.395 }, 00:20:18.395 "claimed": false, 00:20:18.395 "zoned": false, 00:20:18.395 "supported_io_types": { 00:20:18.395 "read": true, 00:20:18.395 "write": true, 00:20:18.395 "unmap": false, 00:20:18.395 "flush": false, 00:20:18.395 "reset": true, 00:20:18.395 "nvme_admin": false, 00:20:18.395 "nvme_io": false, 00:20:18.395 "nvme_io_md": false, 00:20:18.395 "write_zeroes": true, 00:20:18.395 "zcopy": false, 00:20:18.395 "get_zone_info": false, 00:20:18.395 "zone_management": false, 00:20:18.395 "zone_append": false, 00:20:18.395 "compare": false, 00:20:18.395 "compare_and_write": false, 00:20:18.395 "abort": false, 00:20:18.395 "seek_hole": false, 00:20:18.395 "seek_data": false, 00:20:18.395 "copy": false, 00:20:18.395 "nvme_iov_md": false 00:20:18.395 }, 00:20:18.395 "memory_domains": [ 00:20:18.395 { 00:20:18.395 "dma_device_id": "system", 00:20:18.395 "dma_device_type": 1 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:18.395 "dma_device_type": 2 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "dma_device_id": "system", 00:20:18.395 "dma_device_type": 1 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.395 "dma_device_type": 2 00:20:18.395 } 00:20:18.395 ], 00:20:18.395 "driver_specific": { 00:20:18.395 "raid": { 00:20:18.395 "uuid": "808a2557-ca32-414d-9374-471931b0ad1f", 00:20:18.395 "strip_size_kb": 0, 00:20:18.395 "state": "online", 00:20:18.395 "raid_level": "raid1", 00:20:18.395 "superblock": true, 00:20:18.395 "num_base_bdevs": 2, 00:20:18.395 "num_base_bdevs_discovered": 2, 00:20:18.395 "num_base_bdevs_operational": 2, 00:20:18.395 "base_bdevs_list": [ 00:20:18.395 { 00:20:18.395 "name": "BaseBdev1", 00:20:18.395 "uuid": "d34e1faa-dd75-48e3-a461-3f3d38906589", 00:20:18.395 "is_configured": true, 00:20:18.395 "data_offset": 256, 00:20:18.395 "data_size": 7936 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "name": "BaseBdev2", 00:20:18.395 "uuid": "0cd1a28d-0063-40aa-9643-d06b7b71fc72", 00:20:18.395 "is_configured": true, 00:20:18.395 "data_offset": 256, 00:20:18.395 "data_size": 7936 00:20:18.395 } 00:20:18.395 ] 00:20:18.395 } 00:20:18.395 } 00:20:18.395 }' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:18.395 BaseBdev2' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:18.395 
20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:18.395 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.396 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.396 [2024-12-05 20:13:19.773647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.655 20:13:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.655 "name": "Existed_Raid", 00:20:18.655 "uuid": "808a2557-ca32-414d-9374-471931b0ad1f", 00:20:18.655 "strip_size_kb": 0, 00:20:18.655 "state": "online", 00:20:18.655 "raid_level": "raid1", 00:20:18.655 "superblock": true, 00:20:18.655 "num_base_bdevs": 2, 00:20:18.655 "num_base_bdevs_discovered": 1, 00:20:18.655 "num_base_bdevs_operational": 1, 00:20:18.655 "base_bdevs_list": [ 00:20:18.655 { 00:20:18.655 "name": null, 00:20:18.655 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:18.655 "is_configured": false, 00:20:18.655 "data_offset": 0, 00:20:18.655 "data_size": 7936 00:20:18.655 }, 00:20:18.655 { 00:20:18.655 "name": "BaseBdev2", 00:20:18.655 "uuid": "0cd1a28d-0063-40aa-9643-d06b7b71fc72", 00:20:18.655 "is_configured": true, 00:20:18.655 "data_offset": 256, 00:20:18.655 "data_size": 7936 00:20:18.655 } 00:20:18.655 ] 00:20:18.655 }' 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.655 20:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:19.224 20:13:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.224 [2024-12-05 20:13:20.423138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.224 [2024-12-05 20:13:20.423251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.224 [2024-12-05 20:13:20.511976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.224 [2024-12-05 20:13:20.512028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.224 [2024-12-05 20:13:20.512041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:19.224 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88529 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88529 ']' 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88529 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88529 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.225 killing process with pid 88529 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88529' 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88529 00:20:19.225 [2024-12-05 20:13:20.604854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.225 20:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88529 00:20:19.225 [2024-12-05 20:13:20.620151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.606 
************************************ 00:20:20.606 END TEST raid_state_function_test_sb_md_interleaved 00:20:20.606 ************************************ 00:20:20.606 20:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:20.606 00:20:20.606 real 0m5.107s 00:20:20.606 user 0m7.420s 00:20:20.606 sys 0m0.888s 00:20:20.606 20:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.606 20:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.606 20:13:21 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:20.606 20:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:20.606 20:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.606 20:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.606 ************************************ 00:20:20.606 START TEST raid_superblock_test_md_interleaved 00:20:20.606 ************************************ 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88781 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88781 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88781 ']' 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.607 20:13:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.607 [2024-12-05 20:13:21.883190] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:20.607 [2024-12-05 20:13:21.883447] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88781 ] 00:20:20.867 [2024-12-05 20:13:22.068580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.867 [2024-12-05 20:13:22.172282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.126 [2024-12-05 20:13:22.362882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.126 [2024-12-05 20:13:22.363016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.386 malloc1 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.386 [2024-12-05 20:13:22.734866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:21.386 [2024-12-05 20:13:22.734935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.386 [2024-12-05 20:13:22.734957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:21.386 [2024-12-05 20:13:22.734966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.386 
[2024-12-05 20:13:22.736754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.386 [2024-12-05 20:13:22.736790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:21.386 pt1 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.386 malloc2 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.386 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.386 [2024-12-05 20:13:22.788563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.386 [2024-12-05 20:13:22.788706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.386 [2024-12-05 20:13:22.788753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:21.386 [2024-12-05 20:13:22.788781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.386 [2024-12-05 20:13:22.790564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.387 [2024-12-05 20:13:22.790649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.387 pt2 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.387 [2024-12-05 20:13:22.800578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:21.387 [2024-12-05 20:13:22.802400] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:21.387 [2024-12-05 20:13:22.802629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:21.387 [2024-12-05 20:13:22.802673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:21.387 [2024-12-05 20:13:22.802777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:21.387 [2024-12-05 20:13:22.802899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:21.387 [2024-12-05 20:13:22.802942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:21.387 [2024-12-05 20:13:22.803043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.387 
20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.387 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.646 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.646 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.646 "name": "raid_bdev1", 00:20:21.646 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:21.646 "strip_size_kb": 0, 00:20:21.646 "state": "online", 00:20:21.646 "raid_level": "raid1", 00:20:21.646 "superblock": true, 00:20:21.646 "num_base_bdevs": 2, 00:20:21.646 "num_base_bdevs_discovered": 2, 00:20:21.646 "num_base_bdevs_operational": 2, 00:20:21.646 "base_bdevs_list": [ 00:20:21.646 { 00:20:21.646 "name": "pt1", 00:20:21.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.646 "is_configured": true, 00:20:21.646 "data_offset": 256, 00:20:21.646 "data_size": 7936 00:20:21.646 }, 00:20:21.646 { 00:20:21.646 "name": "pt2", 00:20:21.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.646 "is_configured": true, 00:20:21.646 "data_offset": 256, 00:20:21.646 "data_size": 7936 00:20:21.646 } 00:20:21.646 ] 00:20:21.646 }' 00:20:21.646 20:13:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.646 20:13:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.905 [2024-12-05 20:13:23.268001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.905 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.906 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:21.906 "name": "raid_bdev1", 00:20:21.906 "aliases": [ 00:20:21.906 "b2e2d42a-4636-41ac-aa65-2760abc425c7" 00:20:21.906 ], 00:20:21.906 "product_name": "Raid Volume", 00:20:21.906 "block_size": 4128, 00:20:21.906 "num_blocks": 7936, 00:20:21.906 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:21.906 "md_size": 32, 
00:20:21.906 "md_interleave": true, 00:20:21.906 "dif_type": 0, 00:20:21.906 "assigned_rate_limits": { 00:20:21.906 "rw_ios_per_sec": 0, 00:20:21.906 "rw_mbytes_per_sec": 0, 00:20:21.906 "r_mbytes_per_sec": 0, 00:20:21.906 "w_mbytes_per_sec": 0 00:20:21.906 }, 00:20:21.906 "claimed": false, 00:20:21.906 "zoned": false, 00:20:21.906 "supported_io_types": { 00:20:21.906 "read": true, 00:20:21.906 "write": true, 00:20:21.906 "unmap": false, 00:20:21.906 "flush": false, 00:20:21.906 "reset": true, 00:20:21.906 "nvme_admin": false, 00:20:21.906 "nvme_io": false, 00:20:21.906 "nvme_io_md": false, 00:20:21.906 "write_zeroes": true, 00:20:21.906 "zcopy": false, 00:20:21.906 "get_zone_info": false, 00:20:21.906 "zone_management": false, 00:20:21.906 "zone_append": false, 00:20:21.906 "compare": false, 00:20:21.906 "compare_and_write": false, 00:20:21.906 "abort": false, 00:20:21.906 "seek_hole": false, 00:20:21.906 "seek_data": false, 00:20:21.906 "copy": false, 00:20:21.906 "nvme_iov_md": false 00:20:21.906 }, 00:20:21.906 "memory_domains": [ 00:20:21.906 { 00:20:21.906 "dma_device_id": "system", 00:20:21.906 "dma_device_type": 1 00:20:21.906 }, 00:20:21.906 { 00:20:21.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.906 "dma_device_type": 2 00:20:21.906 }, 00:20:21.906 { 00:20:21.906 "dma_device_id": "system", 00:20:21.906 "dma_device_type": 1 00:20:21.906 }, 00:20:21.906 { 00:20:21.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.906 "dma_device_type": 2 00:20:21.906 } 00:20:21.906 ], 00:20:21.906 "driver_specific": { 00:20:21.906 "raid": { 00:20:21.906 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:21.906 "strip_size_kb": 0, 00:20:21.906 "state": "online", 00:20:21.906 "raid_level": "raid1", 00:20:21.906 "superblock": true, 00:20:21.906 "num_base_bdevs": 2, 00:20:21.906 "num_base_bdevs_discovered": 2, 00:20:21.906 "num_base_bdevs_operational": 2, 00:20:21.906 "base_bdevs_list": [ 00:20:21.906 { 00:20:21.906 "name": "pt1", 00:20:21.906 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:21.906 "is_configured": true, 00:20:21.906 "data_offset": 256, 00:20:21.906 "data_size": 7936 00:20:21.906 }, 00:20:21.906 { 00:20:21.906 "name": "pt2", 00:20:21.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.906 "is_configured": true, 00:20:21.906 "data_offset": 256, 00:20:21.906 "data_size": 7936 00:20:21.906 } 00:20:21.906 ] 00:20:21.906 } 00:20:21.906 } 00:20:21.906 }' 00:20:21.906 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:22.166 pt2' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:22.166 20:13:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:22.166 [2024-12-05 20:13:23.479590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2e2d42a-4636-41ac-aa65-2760abc425c7 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b2e2d42a-4636-41ac-aa65-2760abc425c7 ']' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 [2024-12-05 20:13:23.515284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.166 [2024-12-05 20:13:23.515307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.166 [2024-12-05 20:13:23.515370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.166 [2024-12-05 20:13:23.515416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.166 [2024-12-05 20:13:23.515427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.166 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.427 20:13:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.427 [2024-12-05 20:13:23.659053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:22.427 [2024-12-05 20:13:23.660869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:22.427 [2024-12-05 20:13:23.661006] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:22.427 [2024-12-05 20:13:23.661100] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:22.427 [2024-12-05 20:13:23.661144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.427 [2024-12-05 20:13:23.661192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:22.427 request: 00:20:22.427 { 00:20:22.427 "name": "raid_bdev1", 00:20:22.427 "raid_level": "raid1", 00:20:22.427 "base_bdevs": [ 00:20:22.427 "malloc1", 00:20:22.427 "malloc2" 00:20:22.427 ], 00:20:22.427 "superblock": false, 00:20:22.427 "method": "bdev_raid_create", 00:20:22.427 "req_id": 1 00:20:22.427 } 00:20:22.427 Got JSON-RPC error response 00:20:22.427 response: 00:20:22.427 { 00:20:22.427 "code": -17, 00:20:22.427 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:22.427 } 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.427 20:13:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.427 [2024-12-05 20:13:23.722944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:22.427 [2024-12-05 20:13:23.723035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.427 [2024-12-05 20:13:23.723088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:22.427 [2024-12-05 20:13:23.723118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.427 [2024-12-05 20:13:23.724916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.427 [2024-12-05 20:13:23.724986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:22.427 [2024-12-05 20:13:23.725055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:22.427 [2024-12-05 20:13:23.725129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:22.427 pt1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.427 20:13:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.427 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.427 
"name": "raid_bdev1", 00:20:22.427 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:22.427 "strip_size_kb": 0, 00:20:22.427 "state": "configuring", 00:20:22.427 "raid_level": "raid1", 00:20:22.427 "superblock": true, 00:20:22.427 "num_base_bdevs": 2, 00:20:22.427 "num_base_bdevs_discovered": 1, 00:20:22.427 "num_base_bdevs_operational": 2, 00:20:22.428 "base_bdevs_list": [ 00:20:22.428 { 00:20:22.428 "name": "pt1", 00:20:22.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.428 "is_configured": true, 00:20:22.428 "data_offset": 256, 00:20:22.428 "data_size": 7936 00:20:22.428 }, 00:20:22.428 { 00:20:22.428 "name": null, 00:20:22.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.428 "is_configured": false, 00:20:22.428 "data_offset": 256, 00:20:22.428 "data_size": 7936 00:20:22.428 } 00:20:22.428 ] 00:20:22.428 }' 00:20:22.428 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.428 20:13:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.004 [2024-12-05 20:13:24.186119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.004 [2024-12-05 20:13:24.186181] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.004 [2024-12-05 20:13:24.186200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:23.004 [2024-12-05 20:13:24.186210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.004 [2024-12-05 20:13:24.186336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.004 [2024-12-05 20:13:24.186352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.004 [2024-12-05 20:13:24.186391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:23.004 [2024-12-05 20:13:24.186409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.004 [2024-12-05 20:13:24.186482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:23.004 [2024-12-05 20:13:24.186492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:23.004 [2024-12-05 20:13:24.186558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:23.004 [2024-12-05 20:13:24.186618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:23.004 [2024-12-05 20:13:24.186625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:23.004 [2024-12-05 20:13:24.186678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.004 pt2 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:23.004 20:13:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.004 "name": 
"raid_bdev1", 00:20:23.004 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:23.004 "strip_size_kb": 0, 00:20:23.004 "state": "online", 00:20:23.004 "raid_level": "raid1", 00:20:23.004 "superblock": true, 00:20:23.004 "num_base_bdevs": 2, 00:20:23.004 "num_base_bdevs_discovered": 2, 00:20:23.004 "num_base_bdevs_operational": 2, 00:20:23.004 "base_bdevs_list": [ 00:20:23.004 { 00:20:23.004 "name": "pt1", 00:20:23.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.004 "is_configured": true, 00:20:23.004 "data_offset": 256, 00:20:23.004 "data_size": 7936 00:20:23.004 }, 00:20:23.004 { 00:20:23.004 "name": "pt2", 00:20:23.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.004 "is_configured": true, 00:20:23.004 "data_offset": 256, 00:20:23.004 "data_size": 7936 00:20:23.004 } 00:20:23.004 ] 00:20:23.004 }' 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.004 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:23.283 20:13:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:23.283 [2024-12-05 20:13:24.641601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:23.283 "name": "raid_bdev1", 00:20:23.283 "aliases": [ 00:20:23.283 "b2e2d42a-4636-41ac-aa65-2760abc425c7" 00:20:23.283 ], 00:20:23.283 "product_name": "Raid Volume", 00:20:23.283 "block_size": 4128, 00:20:23.283 "num_blocks": 7936, 00:20:23.283 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:23.283 "md_size": 32, 00:20:23.283 "md_interleave": true, 00:20:23.283 "dif_type": 0, 00:20:23.283 "assigned_rate_limits": { 00:20:23.283 "rw_ios_per_sec": 0, 00:20:23.283 "rw_mbytes_per_sec": 0, 00:20:23.283 "r_mbytes_per_sec": 0, 00:20:23.283 "w_mbytes_per_sec": 0 00:20:23.283 }, 00:20:23.283 "claimed": false, 00:20:23.283 "zoned": false, 00:20:23.283 "supported_io_types": { 00:20:23.283 "read": true, 00:20:23.283 "write": true, 00:20:23.283 "unmap": false, 00:20:23.283 "flush": false, 00:20:23.283 "reset": true, 00:20:23.283 "nvme_admin": false, 00:20:23.283 "nvme_io": false, 00:20:23.283 "nvme_io_md": false, 00:20:23.283 "write_zeroes": true, 00:20:23.283 "zcopy": false, 00:20:23.283 "get_zone_info": false, 00:20:23.283 "zone_management": false, 00:20:23.283 "zone_append": false, 00:20:23.283 "compare": false, 00:20:23.283 "compare_and_write": false, 00:20:23.283 "abort": false, 00:20:23.283 "seek_hole": false, 00:20:23.283 "seek_data": false, 00:20:23.283 "copy": false, 00:20:23.283 "nvme_iov_md": 
false 00:20:23.283 }, 00:20:23.283 "memory_domains": [ 00:20:23.283 { 00:20:23.283 "dma_device_id": "system", 00:20:23.283 "dma_device_type": 1 00:20:23.283 }, 00:20:23.283 { 00:20:23.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.283 "dma_device_type": 2 00:20:23.283 }, 00:20:23.283 { 00:20:23.283 "dma_device_id": "system", 00:20:23.283 "dma_device_type": 1 00:20:23.283 }, 00:20:23.283 { 00:20:23.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.283 "dma_device_type": 2 00:20:23.283 } 00:20:23.283 ], 00:20:23.283 "driver_specific": { 00:20:23.283 "raid": { 00:20:23.283 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:23.283 "strip_size_kb": 0, 00:20:23.283 "state": "online", 00:20:23.283 "raid_level": "raid1", 00:20:23.283 "superblock": true, 00:20:23.283 "num_base_bdevs": 2, 00:20:23.283 "num_base_bdevs_discovered": 2, 00:20:23.283 "num_base_bdevs_operational": 2, 00:20:23.283 "base_bdevs_list": [ 00:20:23.283 { 00:20:23.283 "name": "pt1", 00:20:23.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.283 "is_configured": true, 00:20:23.283 "data_offset": 256, 00:20:23.283 "data_size": 7936 00:20:23.283 }, 00:20:23.283 { 00:20:23.283 "name": "pt2", 00:20:23.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.283 "is_configured": true, 00:20:23.283 "data_offset": 256, 00:20:23.283 "data_size": 7936 00:20:23.283 } 00:20:23.283 ] 00:20:23.283 } 00:20:23.283 } 00:20:23.283 }' 00:20:23.283 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:23.562 pt2' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.562 [2024-12-05 20:13:24.877225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b2e2d42a-4636-41ac-aa65-2760abc425c7 '!=' b2e2d42a-4636-41ac-aa65-2760abc425c7 ']' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.562 [2024-12-05 20:13:24.916963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.562 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:23.562 "name": "raid_bdev1", 00:20:23.562 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:23.562 "strip_size_kb": 0, 00:20:23.562 "state": "online", 00:20:23.562 "raid_level": "raid1", 00:20:23.562 "superblock": true, 00:20:23.562 "num_base_bdevs": 2, 00:20:23.562 "num_base_bdevs_discovered": 1, 00:20:23.562 "num_base_bdevs_operational": 1, 00:20:23.563 "base_bdevs_list": [ 00:20:23.563 { 00:20:23.563 "name": null, 00:20:23.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.563 "is_configured": false, 00:20:23.563 "data_offset": 0, 00:20:23.563 "data_size": 7936 00:20:23.563 }, 00:20:23.563 { 00:20:23.563 "name": "pt2", 00:20:23.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.563 "is_configured": true, 00:20:23.563 "data_offset": 256, 00:20:23.563 "data_size": 7936 00:20:23.563 } 00:20:23.563 ] 00:20:23.563 }' 00:20:23.563 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.563 20:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 [2024-12-05 20:13:25.328186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.145 [2024-12-05 20:13:25.328265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.145 [2024-12-05 20:13:25.328362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.145 [2024-12-05 20:13:25.328421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:24.145 [2024-12-05 20:13:25.328502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 [2024-12-05 20:13:25.404075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.145 [2024-12-05 20:13:25.404135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.145 [2024-12-05 20:13:25.404150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:24.145 [2024-12-05 20:13:25.404159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.145 [2024-12-05 20:13:25.406062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.145 [2024-12-05 20:13:25.406149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.145 [2024-12-05 20:13:25.406202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:24.145 [2024-12-05 20:13:25.406256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.145 [2024-12-05 20:13:25.406319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:24.145 [2024-12-05 20:13:25.406331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:24.145 [2024-12-05 20:13:25.406420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:24.145 [2024-12-05 20:13:25.406482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:24.145 [2024-12-05 20:13:25.406489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:24.145 [2024-12-05 20:13:25.406542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.145 pt2 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.145 20:13:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.145 "name": "raid_bdev1", 00:20:24.145 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:24.145 "strip_size_kb": 0, 00:20:24.145 "state": "online", 00:20:24.145 "raid_level": "raid1", 00:20:24.145 "superblock": true, 00:20:24.145 "num_base_bdevs": 2, 00:20:24.145 "num_base_bdevs_discovered": 1, 00:20:24.145 "num_base_bdevs_operational": 1, 00:20:24.145 "base_bdevs_list": [ 00:20:24.145 { 00:20:24.145 "name": null, 00:20:24.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.145 "is_configured": false, 00:20:24.145 "data_offset": 256, 00:20:24.145 "data_size": 7936 00:20:24.145 }, 00:20:24.145 { 00:20:24.145 "name": "pt2", 00:20:24.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.145 "is_configured": true, 00:20:24.145 "data_offset": 256, 00:20:24.145 "data_size": 7936 00:20:24.145 } 00:20:24.145 ] 00:20:24.145 }' 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.145 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.714 20:13:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.714 [2024-12-05 20:13:25.871235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.714 [2024-12-05 20:13:25.871307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.714 [2024-12-05 20:13:25.871384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.714 [2024-12-05 20:13:25.871454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.714 [2024-12-05 20:13:25.871502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.714 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.714 [2024-12-05 20:13:25.935159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:24.714 [2024-12-05 20:13:25.935250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.714 [2024-12-05 20:13:25.935283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:24.714 [2024-12-05 20:13:25.935309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.714 [2024-12-05 20:13:25.937156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.714 [2024-12-05 20:13:25.937226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:24.714 [2024-12-05 20:13:25.937299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:24.714 [2024-12-05 20:13:25.937368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:24.715 [2024-12-05 20:13:25.937483] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:24.715 [2024-12-05 20:13:25.937532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.715 [2024-12-05 20:13:25.937570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:24.715 [2024-12-05 20:13:25.937693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.715 [2024-12-05 20:13:25.937797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:24.715 [2024-12-05 20:13:25.937834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:24.715 [2024-12-05 20:13:25.937926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:24.715 [2024-12-05 20:13:25.938014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:24.715 [2024-12-05 20:13:25.938048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:24.715 [2024-12-05 20:13:25.938143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.715 pt1 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.715 20:13:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.715 "name": "raid_bdev1", 00:20:24.715 "uuid": "b2e2d42a-4636-41ac-aa65-2760abc425c7", 00:20:24.715 "strip_size_kb": 0, 00:20:24.715 "state": "online", 00:20:24.715 "raid_level": "raid1", 00:20:24.715 "superblock": true, 00:20:24.715 "num_base_bdevs": 2, 00:20:24.715 "num_base_bdevs_discovered": 1, 00:20:24.715 "num_base_bdevs_operational": 1, 00:20:24.715 "base_bdevs_list": [ 00:20:24.715 { 00:20:24.715 "name": null, 00:20:24.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.715 "is_configured": false, 00:20:24.715 "data_offset": 256, 00:20:24.715 "data_size": 7936 00:20:24.715 }, 00:20:24.715 { 00:20:24.715 "name": "pt2", 00:20:24.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.715 "is_configured": true, 00:20:24.715 "data_offset": 256, 00:20:24.715 "data_size": 7936 00:20:24.715 } 00:20:24.715 ] 00:20:24.715 }' 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.715 20:13:25 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.974 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:24.974 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:24.974 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.974 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.234 [2024-12-05 20:13:26.442477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b2e2d42a-4636-41ac-aa65-2760abc425c7 '!=' b2e2d42a-4636-41ac-aa65-2760abc425c7 ']' 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88781 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88781 ']' 00:20:25.234 20:13:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88781 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88781 00:20:25.234 killing process with pid 88781 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88781' 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88781 00:20:25.234 [2024-12-05 20:13:26.520600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.234 [2024-12-05 20:13:26.520670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.234 [2024-12-05 20:13:26.520714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.234 [2024-12-05 20:13:26.520725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:25.234 20:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88781 00:20:25.494 [2024-12-05 20:13:26.711678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.433 ************************************ 00:20:26.433 END TEST raid_superblock_test_md_interleaved 00:20:26.433 ************************************ 00:20:26.433 20:13:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:26.433 00:20:26.433 real 0m5.997s 00:20:26.433 user 0m9.106s 00:20:26.433 sys 0m1.128s 00:20:26.434 20:13:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.434 20:13:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.434 20:13:27 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:26.434 20:13:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:26.434 20:13:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.434 20:13:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.434 ************************************ 00:20:26.434 START TEST raid_rebuild_test_sb_md_interleaved 00:20:26.434 ************************************ 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.434 20:13:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:26.434 
20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89104 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89104 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89104 ']' 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.434 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.693 [2024-12-05 20:13:27.957993] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:26.693 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.693 Zero copy mechanism will not be used. 
00:20:26.693 [2024-12-05 20:13:27.958199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89104 ] 00:20:26.952 [2024-12-05 20:13:28.131034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.952 [2024-12-05 20:13:28.237412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.212 [2024-12-05 20:13:28.435292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.212 [2024-12-05 20:13:28.435385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 BaseBdev1_malloc 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.472 20:13:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 [2024-12-05 20:13:28.857496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.472 [2024-12-05 20:13:28.857565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.472 [2024-12-05 20:13:28.857586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.472 [2024-12-05 20:13:28.857597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.472 [2024-12-05 20:13:28.859365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.472 [2024-12-05 20:13:28.859405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.472 BaseBdev1 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 BaseBdev2_malloc 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.472 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.472 [2024-12-05 20:13:28.906835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:27.472 [2024-12-05 20:13:28.906907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.472 [2024-12-05 20:13:28.906926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:27.472 [2024-12-05 20:13:28.906939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.732 [2024-12-05 20:13:28.908702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.732 [2024-12-05 20:13:28.908832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.732 BaseBdev2 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.732 spare_malloc 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.732 spare_delay 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.732 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.733 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.733 [2024-12-05 20:13:28.984876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.733 [2024-12-05 20:13:28.984949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.733 [2024-12-05 20:13:28.984968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:27.733 [2024-12-05 20:13:28.984978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.733 [2024-12-05 20:13:28.986716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.733 [2024-12-05 20:13:28.986756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.733 spare 00:20:27.733 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.733 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:27.733 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.733 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.733 [2024-12-05 20:13:28.996942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.733 [2024-12-05 20:13:28.998686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.733 [2024-12-05 
20:13:28.998874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.733 [2024-12-05 20:13:28.998900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:27.733 [2024-12-05 20:13:28.998967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:27.733 [2024-12-05 20:13:28.999029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.733 [2024-12-05 20:13:28.999037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:27.733 [2024-12-05 20:13:28.999097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.733 "name": "raid_bdev1", 00:20:27.733 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:27.733 "strip_size_kb": 0, 00:20:27.733 "state": "online", 00:20:27.733 "raid_level": "raid1", 00:20:27.733 "superblock": true, 00:20:27.733 "num_base_bdevs": 2, 00:20:27.733 "num_base_bdevs_discovered": 2, 00:20:27.733 "num_base_bdevs_operational": 2, 00:20:27.733 "base_bdevs_list": [ 00:20:27.733 { 00:20:27.733 "name": "BaseBdev1", 00:20:27.733 "uuid": "edbcf0e6-ae2d-5919-b287-2dd0c866447b", 00:20:27.733 "is_configured": true, 00:20:27.733 "data_offset": 256, 00:20:27.733 "data_size": 7936 00:20:27.733 }, 00:20:27.733 { 00:20:27.733 "name": "BaseBdev2", 00:20:27.733 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:27.733 "is_configured": true, 00:20:27.733 "data_offset": 256, 00:20:27.733 "data_size": 7936 00:20:27.733 } 00:20:27.733 ] 00:20:27.733 }' 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.733 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 20:13:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:28.302 [2024-12-05 20:13:29.436668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:28.302 20:13:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 [2024-12-05 20:13:29.520268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.302 20:13:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.302 "name": "raid_bdev1", 00:20:28.302 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:28.302 "strip_size_kb": 0, 00:20:28.302 "state": "online", 00:20:28.302 "raid_level": "raid1", 00:20:28.302 "superblock": true, 00:20:28.302 "num_base_bdevs": 2, 00:20:28.302 "num_base_bdevs_discovered": 1, 00:20:28.302 "num_base_bdevs_operational": 1, 00:20:28.302 "base_bdevs_list": [ 00:20:28.302 { 00:20:28.302 "name": null, 00:20:28.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.302 "is_configured": false, 00:20:28.302 "data_offset": 0, 00:20:28.302 "data_size": 7936 00:20:28.302 }, 00:20:28.302 { 00:20:28.302 "name": "BaseBdev2", 00:20:28.302 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:28.302 "is_configured": true, 00:20:28.302 "data_offset": 256, 00:20:28.302 "data_size": 7936 00:20:28.302 } 00:20:28.302 ] 00:20:28.302 }' 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.302 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.561 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.561 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.561 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.561 [2024-12-05 20:13:29.935643] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.561 [2024-12-05 20:13:29.951117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:28.561 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.561 20:13:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:28.562 [2024-12-05 20:13:29.952876] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.942 "name": "raid_bdev1", 00:20:29.942 
"uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:29.942 "strip_size_kb": 0, 00:20:29.942 "state": "online", 00:20:29.942 "raid_level": "raid1", 00:20:29.942 "superblock": true, 00:20:29.942 "num_base_bdevs": 2, 00:20:29.942 "num_base_bdevs_discovered": 2, 00:20:29.942 "num_base_bdevs_operational": 2, 00:20:29.942 "process": { 00:20:29.942 "type": "rebuild", 00:20:29.942 "target": "spare", 00:20:29.942 "progress": { 00:20:29.942 "blocks": 2560, 00:20:29.942 "percent": 32 00:20:29.942 } 00:20:29.942 }, 00:20:29.942 "base_bdevs_list": [ 00:20:29.942 { 00:20:29.942 "name": "spare", 00:20:29.942 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:29.942 "is_configured": true, 00:20:29.942 "data_offset": 256, 00:20:29.942 "data_size": 7936 00:20:29.942 }, 00:20:29.942 { 00:20:29.942 "name": "BaseBdev2", 00:20:29.942 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:29.942 "is_configured": true, 00:20:29.942 "data_offset": 256, 00:20:29.942 "data_size": 7936 00:20:29.942 } 00:20:29.942 ] 00:20:29.942 }' 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 [2024-12-05 20:13:31.105223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:29.942 [2024-12-05 20:13:31.157617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:29.942 [2024-12-05 20:13:31.157673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.942 [2024-12-05 20:13:31.157688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.942 [2024-12-05 20:13:31.157699] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.942 "name": "raid_bdev1", 00:20:29.942 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:29.942 "strip_size_kb": 0, 00:20:29.942 "state": "online", 00:20:29.942 "raid_level": "raid1", 00:20:29.942 "superblock": true, 00:20:29.942 "num_base_bdevs": 2, 00:20:29.942 "num_base_bdevs_discovered": 1, 00:20:29.942 "num_base_bdevs_operational": 1, 00:20:29.942 "base_bdevs_list": [ 00:20:29.942 { 00:20:29.942 "name": null, 00:20:29.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.942 "is_configured": false, 00:20:29.942 "data_offset": 0, 00:20:29.942 "data_size": 7936 00:20:29.942 }, 00:20:29.942 { 00:20:29.942 "name": "BaseBdev2", 00:20:29.942 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:29.942 "is_configured": true, 00:20:29.942 "data_offset": 256, 00:20:29.942 "data_size": 7936 00:20:29.942 } 00:20:29.942 ] 00:20:29.942 }' 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.942 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.202 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.202 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:30.202 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.202 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.202 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.462 "name": "raid_bdev1", 00:20:30.462 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:30.462 "strip_size_kb": 0, 00:20:30.462 "state": "online", 00:20:30.462 "raid_level": "raid1", 00:20:30.462 "superblock": true, 00:20:30.462 "num_base_bdevs": 2, 00:20:30.462 "num_base_bdevs_discovered": 1, 00:20:30.462 "num_base_bdevs_operational": 1, 00:20:30.462 "base_bdevs_list": [ 00:20:30.462 { 00:20:30.462 "name": null, 00:20:30.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.462 "is_configured": false, 00:20:30.462 "data_offset": 0, 00:20:30.462 "data_size": 7936 00:20:30.462 }, 00:20:30.462 { 00:20:30.462 "name": "BaseBdev2", 00:20:30.462 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:30.462 "is_configured": true, 00:20:30.462 "data_offset": 256, 00:20:30.462 "data_size": 7936 00:20:30.462 } 00:20:30.462 ] 00:20:30.462 }' 
00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.462 [2024-12-05 20:13:31.781395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.462 [2024-12-05 20:13:31.797217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.462 [2024-12-05 20:13:31.799087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.462 20:13:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.402 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.662 "name": "raid_bdev1", 00:20:31.662 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:31.662 "strip_size_kb": 0, 00:20:31.662 "state": "online", 00:20:31.662 "raid_level": "raid1", 00:20:31.662 "superblock": true, 00:20:31.662 "num_base_bdevs": 2, 00:20:31.662 "num_base_bdevs_discovered": 2, 00:20:31.662 "num_base_bdevs_operational": 2, 00:20:31.662 "process": { 00:20:31.662 "type": "rebuild", 00:20:31.662 "target": "spare", 00:20:31.662 "progress": { 00:20:31.662 "blocks": 2560, 00:20:31.662 "percent": 32 00:20:31.662 } 00:20:31.662 }, 00:20:31.662 "base_bdevs_list": [ 00:20:31.662 { 00:20:31.662 "name": "spare", 00:20:31.662 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:31.662 "is_configured": true, 00:20:31.662 "data_offset": 256, 00:20:31.662 "data_size": 7936 00:20:31.662 }, 00:20:31.662 { 00:20:31.662 "name": "BaseBdev2", 00:20:31.662 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:31.662 "is_configured": true, 00:20:31.662 "data_offset": 256, 00:20:31.662 "data_size": 7936 00:20:31.662 } 00:20:31.662 ] 00:20:31.662 }' 00:20:31.662 20:13:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:31.662 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.662 20:13:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.662 20:13:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.662 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.662 "name": "raid_bdev1", 00:20:31.662 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:31.662 "strip_size_kb": 0, 00:20:31.662 "state": "online", 00:20:31.662 "raid_level": "raid1", 00:20:31.662 "superblock": true, 00:20:31.662 "num_base_bdevs": 2, 00:20:31.662 "num_base_bdevs_discovered": 2, 00:20:31.662 "num_base_bdevs_operational": 2, 00:20:31.662 "process": { 00:20:31.662 "type": "rebuild", 00:20:31.662 "target": "spare", 00:20:31.662 "progress": { 00:20:31.662 "blocks": 2816, 00:20:31.662 "percent": 35 00:20:31.662 } 00:20:31.662 }, 00:20:31.662 "base_bdevs_list": [ 00:20:31.663 { 00:20:31.663 "name": "spare", 00:20:31.663 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 256, 00:20:31.663 "data_size": 7936 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "name": "BaseBdev2", 00:20:31.663 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 256, 00:20:31.663 "data_size": 7936 00:20:31.663 } 00:20:31.663 ] 00:20:31.663 }' 00:20:31.663 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.663 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.663 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.663 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.663 20:13:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.043 20:13:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.043 "name": "raid_bdev1", 00:20:33.043 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:33.043 "strip_size_kb": 0, 00:20:33.043 "state": "online", 00:20:33.043 "raid_level": "raid1", 00:20:33.043 "superblock": true, 00:20:33.043 "num_base_bdevs": 2, 00:20:33.043 "num_base_bdevs_discovered": 2, 00:20:33.043 "num_base_bdevs_operational": 2, 00:20:33.043 "process": { 00:20:33.043 "type": "rebuild", 00:20:33.043 "target": "spare", 00:20:33.043 "progress": { 00:20:33.043 "blocks": 5632, 00:20:33.043 "percent": 70 00:20:33.043 } 00:20:33.043 }, 00:20:33.043 "base_bdevs_list": [ 00:20:33.043 { 00:20:33.043 "name": "spare", 00:20:33.043 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:33.043 "is_configured": true, 00:20:33.043 "data_offset": 256, 00:20:33.043 "data_size": 7936 00:20:33.043 }, 00:20:33.043 { 00:20:33.043 "name": "BaseBdev2", 00:20:33.043 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:33.043 "is_configured": true, 00:20:33.043 "data_offset": 256, 00:20:33.043 "data_size": 7936 00:20:33.043 } 00:20:33.043 ] 00:20:33.043 }' 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.043 20:13:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.611 [2024-12-05 20:13:34.910963] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:33.611 [2024-12-05 20:13:34.911106] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:33.611 [2024-12-05 20:13:34.911241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.869 "name": "raid_bdev1", 00:20:33.869 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:33.869 "strip_size_kb": 0, 00:20:33.869 "state": "online", 00:20:33.869 "raid_level": "raid1", 00:20:33.869 "superblock": true, 00:20:33.869 "num_base_bdevs": 2, 00:20:33.869 
"num_base_bdevs_discovered": 2, 00:20:33.869 "num_base_bdevs_operational": 2, 00:20:33.869 "base_bdevs_list": [ 00:20:33.869 { 00:20:33.869 "name": "spare", 00:20:33.869 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:33.869 "is_configured": true, 00:20:33.869 "data_offset": 256, 00:20:33.869 "data_size": 7936 00:20:33.869 }, 00:20:33.869 { 00:20:33.869 "name": "BaseBdev2", 00:20:33.869 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:33.869 "is_configured": true, 00:20:33.869 "data_offset": 256, 00:20:33.869 "data_size": 7936 00:20:33.869 } 00:20:33.869 ] 00:20:33.869 }' 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:33.869 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.129 20:13:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.129 "name": "raid_bdev1", 00:20:34.129 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:34.129 "strip_size_kb": 0, 00:20:34.129 "state": "online", 00:20:34.129 "raid_level": "raid1", 00:20:34.129 "superblock": true, 00:20:34.129 "num_base_bdevs": 2, 00:20:34.129 "num_base_bdevs_discovered": 2, 00:20:34.129 "num_base_bdevs_operational": 2, 00:20:34.129 "base_bdevs_list": [ 00:20:34.129 { 00:20:34.129 "name": "spare", 00:20:34.129 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:34.129 "is_configured": true, 00:20:34.129 "data_offset": 256, 00:20:34.129 "data_size": 7936 00:20:34.129 }, 00:20:34.129 { 00:20:34.129 "name": "BaseBdev2", 00:20:34.129 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:34.129 "is_configured": true, 00:20:34.129 "data_offset": 256, 00:20:34.129 "data_size": 7936 00:20:34.129 } 00:20:34.129 ] 00:20:34.129 }' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.129 20:13:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.129 "name": 
"raid_bdev1", 00:20:34.129 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:34.129 "strip_size_kb": 0, 00:20:34.129 "state": "online", 00:20:34.129 "raid_level": "raid1", 00:20:34.129 "superblock": true, 00:20:34.129 "num_base_bdevs": 2, 00:20:34.129 "num_base_bdevs_discovered": 2, 00:20:34.129 "num_base_bdevs_operational": 2, 00:20:34.129 "base_bdevs_list": [ 00:20:34.129 { 00:20:34.129 "name": "spare", 00:20:34.129 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:34.129 "is_configured": true, 00:20:34.129 "data_offset": 256, 00:20:34.129 "data_size": 7936 00:20:34.129 }, 00:20:34.129 { 00:20:34.129 "name": "BaseBdev2", 00:20:34.129 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:34.129 "is_configured": true, 00:20:34.129 "data_offset": 256, 00:20:34.129 "data_size": 7936 00:20:34.129 } 00:20:34.129 ] 00:20:34.129 }' 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.129 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 [2024-12-05 20:13:35.921352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.698 [2024-12-05 20:13:35.921441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.698 [2024-12-05 20:13:35.921542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.698 [2024-12-05 20:13:35.921648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.698 [2024-12-05 
20:13:35.921706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.698 20:13:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 [2024-12-05 20:13:35.977241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.698 [2024-12-05 20:13:35.977293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.698 [2024-12-05 20:13:35.977315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:34.698 [2024-12-05 20:13:35.977323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.698 [2024-12-05 20:13:35.979238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.698 [2024-12-05 20:13:35.979274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.698 [2024-12-05 20:13:35.979327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:34.698 [2024-12-05 20:13:35.979374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.698 [2024-12-05 20:13:35.979478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.698 spare 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.698 20:13:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 [2024-12-05 20:13:36.079363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:34.698 [2024-12-05 20:13:36.079392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:34.698 [2024-12-05 20:13:36.079472] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:34.698 [2024-12-05 20:13:36.079544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:34.698 [2024-12-05 20:13:36.079554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:34.698 [2024-12-05 20:13:36.079623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.698 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.699 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.699 20:13:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.699 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.699 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.699 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.958 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.958 "name": "raid_bdev1", 00:20:34.958 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:34.958 "strip_size_kb": 0, 00:20:34.958 "state": "online", 00:20:34.958 "raid_level": "raid1", 00:20:34.958 "superblock": true, 00:20:34.958 "num_base_bdevs": 2, 00:20:34.958 "num_base_bdevs_discovered": 2, 00:20:34.958 "num_base_bdevs_operational": 2, 00:20:34.958 "base_bdevs_list": [ 00:20:34.958 { 00:20:34.958 "name": "spare", 00:20:34.958 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:34.958 "is_configured": true, 00:20:34.958 "data_offset": 256, 00:20:34.958 "data_size": 7936 00:20:34.958 }, 00:20:34.958 { 00:20:34.958 "name": "BaseBdev2", 00:20:34.958 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:34.958 "is_configured": true, 00:20:34.958 "data_offset": 256, 00:20:34.958 "data_size": 7936 00:20:34.958 } 00:20:34.958 ] 00:20:34.958 }' 00:20:34.958 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.958 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.218 20:13:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.218 "name": "raid_bdev1", 00:20:35.218 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:35.218 "strip_size_kb": 0, 00:20:35.218 "state": "online", 00:20:35.218 "raid_level": "raid1", 00:20:35.218 "superblock": true, 00:20:35.218 "num_base_bdevs": 2, 00:20:35.218 "num_base_bdevs_discovered": 2, 00:20:35.218 "num_base_bdevs_operational": 2, 00:20:35.218 "base_bdevs_list": [ 00:20:35.218 { 00:20:35.218 "name": "spare", 00:20:35.218 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:35.218 "is_configured": true, 00:20:35.218 "data_offset": 256, 00:20:35.218 "data_size": 7936 00:20:35.218 }, 00:20:35.218 { 00:20:35.218 "name": "BaseBdev2", 00:20:35.218 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:35.218 "is_configured": true, 00:20:35.218 "data_offset": 256, 00:20:35.218 "data_size": 7936 00:20:35.218 } 00:20:35.218 ] 00:20:35.218 }' 00:20:35.218 20:13:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.218 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.477 [2024-12-05 20:13:36.708188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.477 20:13:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.477 "name": "raid_bdev1", 00:20:35.477 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:35.477 "strip_size_kb": 0, 00:20:35.477 "state": "online", 00:20:35.477 
"raid_level": "raid1", 00:20:35.477 "superblock": true, 00:20:35.477 "num_base_bdevs": 2, 00:20:35.477 "num_base_bdevs_discovered": 1, 00:20:35.477 "num_base_bdevs_operational": 1, 00:20:35.477 "base_bdevs_list": [ 00:20:35.477 { 00:20:35.477 "name": null, 00:20:35.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.477 "is_configured": false, 00:20:35.477 "data_offset": 0, 00:20:35.477 "data_size": 7936 00:20:35.477 }, 00:20:35.477 { 00:20:35.477 "name": "BaseBdev2", 00:20:35.477 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:35.477 "is_configured": true, 00:20:35.477 "data_offset": 256, 00:20:35.477 "data_size": 7936 00:20:35.477 } 00:20:35.477 ] 00:20:35.477 }' 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.477 20:13:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.737 20:13:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.737 20:13:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.737 20:13:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.737 [2024-12-05 20:13:37.155408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.737 [2024-12-05 20:13:37.155634] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:35.737 [2024-12-05 20:13:37.155699] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:35.737 [2024-12-05 20:13:37.155788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.737 [2024-12-05 20:13:37.170858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:35.737 20:13:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.737 20:13:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:35.737 [2024-12-05 20:13:37.172773] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:37.119 "name": "raid_bdev1", 00:20:37.119 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:37.119 "strip_size_kb": 0, 00:20:37.119 "state": "online", 00:20:37.119 "raid_level": "raid1", 00:20:37.119 "superblock": true, 00:20:37.119 "num_base_bdevs": 2, 00:20:37.119 "num_base_bdevs_discovered": 2, 00:20:37.119 "num_base_bdevs_operational": 2, 00:20:37.119 "process": { 00:20:37.119 "type": "rebuild", 00:20:37.119 "target": "spare", 00:20:37.119 "progress": { 00:20:37.119 "blocks": 2560, 00:20:37.119 "percent": 32 00:20:37.119 } 00:20:37.119 }, 00:20:37.119 "base_bdevs_list": [ 00:20:37.119 { 00:20:37.119 "name": "spare", 00:20:37.119 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:37.119 "is_configured": true, 00:20:37.119 "data_offset": 256, 00:20:37.119 "data_size": 7936 00:20:37.119 }, 00:20:37.119 { 00:20:37.119 "name": "BaseBdev2", 00:20:37.119 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:37.119 "is_configured": true, 00:20:37.119 "data_offset": 256, 00:20:37.119 "data_size": 7936 00:20:37.119 } 00:20:37.119 ] 00:20:37.119 }' 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.119 [2024-12-05 20:13:38.308932] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.119 [2024-12-05 20:13:38.377719] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.119 [2024-12-05 20:13:38.377781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.119 [2024-12-05 20:13:38.377795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.119 [2024-12-05 20:13:38.377803] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.119 20:13:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.119 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.119 "name": "raid_bdev1", 00:20:37.120 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:37.120 "strip_size_kb": 0, 00:20:37.120 "state": "online", 00:20:37.120 "raid_level": "raid1", 00:20:37.120 "superblock": true, 00:20:37.120 "num_base_bdevs": 2, 00:20:37.120 "num_base_bdevs_discovered": 1, 00:20:37.120 "num_base_bdevs_operational": 1, 00:20:37.120 "base_bdevs_list": [ 00:20:37.120 { 00:20:37.120 "name": null, 00:20:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.120 "is_configured": false, 00:20:37.120 "data_offset": 0, 00:20:37.120 "data_size": 7936 00:20:37.120 }, 00:20:37.120 { 00:20:37.120 "name": "BaseBdev2", 00:20:37.120 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:37.120 "is_configured": true, 00:20:37.120 "data_offset": 256, 00:20:37.120 "data_size": 7936 00:20:37.120 } 00:20:37.120 ] 00:20:37.120 }' 00:20:37.120 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.120 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.687 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:37.687 20:13:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.687 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.687 [2024-12-05 20:13:38.825923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.687 [2024-12-05 20:13:38.826083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.687 [2024-12-05 20:13:38.826141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:37.687 [2024-12-05 20:13:38.826175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.687 [2024-12-05 20:13:38.826372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.687 [2024-12-05 20:13:38.826431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.687 [2024-12-05 20:13:38.826509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:37.687 [2024-12-05 20:13:38.826547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:37.687 [2024-12-05 20:13:38.826584] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:37.687 [2024-12-05 20:13:38.826676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.687 [2024-12-05 20:13:38.841386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:37.687 spare 00:20:37.687 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.687 [2024-12-05 20:13:38.843217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.687 20:13:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.628 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:38.629 "name": "raid_bdev1", 00:20:38.629 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:38.629 "strip_size_kb": 0, 00:20:38.629 "state": "online", 00:20:38.629 "raid_level": "raid1", 00:20:38.629 "superblock": true, 00:20:38.629 "num_base_bdevs": 2, 00:20:38.629 "num_base_bdevs_discovered": 2, 00:20:38.629 "num_base_bdevs_operational": 2, 00:20:38.629 "process": { 00:20:38.629 "type": "rebuild", 00:20:38.629 "target": "spare", 00:20:38.629 "progress": { 00:20:38.629 "blocks": 2560, 00:20:38.629 "percent": 32 00:20:38.629 } 00:20:38.629 }, 00:20:38.629 "base_bdevs_list": [ 00:20:38.629 { 00:20:38.629 "name": "spare", 00:20:38.629 "uuid": "cc6eaf30-26f1-5a1e-85ec-b33b93e34728", 00:20:38.629 "is_configured": true, 00:20:38.629 "data_offset": 256, 00:20:38.629 "data_size": 7936 00:20:38.629 }, 00:20:38.629 { 00:20:38.629 "name": "BaseBdev2", 00:20:38.629 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:38.629 "is_configured": true, 00:20:38.629 "data_offset": 256, 00:20:38.629 "data_size": 7936 00:20:38.629 } 00:20:38.629 ] 00:20:38.629 }' 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.629 20:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.629 [2024-12-05 
20:13:40.003028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.629 [2024-12-05 20:13:40.047907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:38.629 [2024-12-05 20:13:40.048037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.629 [2024-12-05 20:13:40.048056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.629 [2024-12-05 20:13:40.048063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.889 20:13:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.889 "name": "raid_bdev1", 00:20:38.889 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:38.889 "strip_size_kb": 0, 00:20:38.889 "state": "online", 00:20:38.889 "raid_level": "raid1", 00:20:38.889 "superblock": true, 00:20:38.889 "num_base_bdevs": 2, 00:20:38.889 "num_base_bdevs_discovered": 1, 00:20:38.889 "num_base_bdevs_operational": 1, 00:20:38.889 "base_bdevs_list": [ 00:20:38.889 { 00:20:38.889 "name": null, 00:20:38.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.889 "is_configured": false, 00:20:38.889 "data_offset": 0, 00:20:38.889 "data_size": 7936 00:20:38.889 }, 00:20:38.889 { 00:20:38.889 "name": "BaseBdev2", 00:20:38.889 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:38.889 "is_configured": true, 00:20:38.889 "data_offset": 256, 00:20:38.889 "data_size": 7936 00:20:38.889 } 00:20:38.889 ] 00:20:38.889 }' 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.889 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.148 20:13:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.148 "name": "raid_bdev1", 00:20:39.148 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:39.148 "strip_size_kb": 0, 00:20:39.148 "state": "online", 00:20:39.148 "raid_level": "raid1", 00:20:39.148 "superblock": true, 00:20:39.148 "num_base_bdevs": 2, 00:20:39.148 "num_base_bdevs_discovered": 1, 00:20:39.148 "num_base_bdevs_operational": 1, 00:20:39.148 "base_bdevs_list": [ 00:20:39.148 { 00:20:39.148 "name": null, 00:20:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.148 "is_configured": false, 00:20:39.148 "data_offset": 0, 00:20:39.148 "data_size": 7936 00:20:39.148 }, 00:20:39.148 { 00:20:39.148 "name": "BaseBdev2", 00:20:39.148 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:39.148 "is_configured": true, 00:20:39.148 "data_offset": 256, 
00:20:39.148 "data_size": 7936 00:20:39.148 } 00:20:39.148 ] 00:20:39.148 }' 00:20:39.148 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 [2024-12-05 20:13:40.640988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:39.408 [2024-12-05 20:13:40.641043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.408 [2024-12-05 20:13:40.641064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:39.408 [2024-12-05 20:13:40.641073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.408 [2024-12-05 20:13:40.641236] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.408 [2024-12-05 20:13:40.641249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.408 [2024-12-05 20:13:40.641296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:39.408 [2024-12-05 20:13:40.641308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:39.408 [2024-12-05 20:13:40.641318] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:39.408 [2024-12-05 20:13:40.641327] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:39.408 BaseBdev1 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.408 20:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.347 20:13:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.347 "name": "raid_bdev1", 00:20:40.347 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:40.347 "strip_size_kb": 0, 00:20:40.347 "state": "online", 00:20:40.347 "raid_level": "raid1", 00:20:40.347 "superblock": true, 00:20:40.347 "num_base_bdevs": 2, 00:20:40.347 "num_base_bdevs_discovered": 1, 00:20:40.347 "num_base_bdevs_operational": 1, 00:20:40.347 "base_bdevs_list": [ 00:20:40.347 { 00:20:40.347 "name": null, 00:20:40.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.347 "is_configured": false, 00:20:40.347 "data_offset": 0, 00:20:40.347 "data_size": 7936 00:20:40.347 }, 00:20:40.347 { 00:20:40.347 "name": "BaseBdev2", 00:20:40.347 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:40.347 "is_configured": true, 00:20:40.347 "data_offset": 256, 00:20:40.347 "data_size": 7936 00:20:40.347 } 00:20:40.347 ] 00:20:40.347 }' 00:20:40.347 20:13:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.347 20:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.916 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.917 "name": "raid_bdev1", 00:20:40.917 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:40.917 "strip_size_kb": 0, 00:20:40.917 "state": "online", 00:20:40.917 "raid_level": "raid1", 00:20:40.917 "superblock": true, 00:20:40.917 "num_base_bdevs": 2, 00:20:40.917 "num_base_bdevs_discovered": 1, 00:20:40.917 "num_base_bdevs_operational": 1, 00:20:40.917 "base_bdevs_list": [ 00:20:40.917 { 00:20:40.917 "name": 
null, 00:20:40.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.917 "is_configured": false, 00:20:40.917 "data_offset": 0, 00:20:40.917 "data_size": 7936 00:20:40.917 }, 00:20:40.917 { 00:20:40.917 "name": "BaseBdev2", 00:20:40.917 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:40.917 "is_configured": true, 00:20:40.917 "data_offset": 256, 00:20:40.917 "data_size": 7936 00:20:40.917 } 00:20:40.917 ] 00:20:40.917 }' 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.917 [2024-12-05 20:13:42.226525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.917 [2024-12-05 20:13:42.226750] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:40.917 [2024-12-05 20:13:42.226812] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:40.917 request: 00:20:40.917 { 00:20:40.917 "base_bdev": "BaseBdev1", 00:20:40.917 "raid_bdev": "raid_bdev1", 00:20:40.917 "method": "bdev_raid_add_base_bdev", 00:20:40.917 "req_id": 1 00:20:40.917 } 00:20:40.917 Got JSON-RPC error response 00:20:40.917 response: 00:20:40.917 { 00:20:40.917 "code": -22, 00:20:40.917 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:40.917 } 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.917 20:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.855 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.114 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.114 "name": "raid_bdev1", 00:20:42.114 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:42.114 "strip_size_kb": 0, 
00:20:42.114 "state": "online", 00:20:42.114 "raid_level": "raid1", 00:20:42.114 "superblock": true, 00:20:42.114 "num_base_bdevs": 2, 00:20:42.114 "num_base_bdevs_discovered": 1, 00:20:42.114 "num_base_bdevs_operational": 1, 00:20:42.114 "base_bdevs_list": [ 00:20:42.114 { 00:20:42.114 "name": null, 00:20:42.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.114 "is_configured": false, 00:20:42.114 "data_offset": 0, 00:20:42.114 "data_size": 7936 00:20:42.114 }, 00:20:42.114 { 00:20:42.114 "name": "BaseBdev2", 00:20:42.114 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:42.114 "is_configured": true, 00:20:42.114 "data_offset": 256, 00:20:42.114 "data_size": 7936 00:20:42.114 } 00:20:42.114 ] 00:20:42.114 }' 00:20:42.114 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.114 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.373 
20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.373 "name": "raid_bdev1", 00:20:42.373 "uuid": "637b11ed-9ffa-406c-a917-2f9dd81c7c7e", 00:20:42.373 "strip_size_kb": 0, 00:20:42.373 "state": "online", 00:20:42.373 "raid_level": "raid1", 00:20:42.373 "superblock": true, 00:20:42.373 "num_base_bdevs": 2, 00:20:42.373 "num_base_bdevs_discovered": 1, 00:20:42.373 "num_base_bdevs_operational": 1, 00:20:42.373 "base_bdevs_list": [ 00:20:42.373 { 00:20:42.373 "name": null, 00:20:42.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.373 "is_configured": false, 00:20:42.373 "data_offset": 0, 00:20:42.373 "data_size": 7936 00:20:42.373 }, 00:20:42.373 { 00:20:42.373 "name": "BaseBdev2", 00:20:42.373 "uuid": "a62c41c1-7eae-5eb5-b86e-72ac678da29d", 00:20:42.373 "is_configured": true, 00:20:42.373 "data_offset": 256, 00:20:42.373 "data_size": 7936 00:20:42.373 } 00:20:42.373 ] 00:20:42.373 }' 00:20:42.373 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89104 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89104 ']' 00:20:42.374 20:13:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89104 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89104 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.374 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89104' 00:20:42.374 killing process with pid 89104 00:20:42.634 Received shutdown signal, test time was about 60.000000 seconds 00:20:42.634 00:20:42.634 Latency(us) 00:20:42.634 [2024-12-05T20:13:44.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.634 [2024-12-05T20:13:44.071Z] =================================================================================================================== 00:20:42.634 [2024-12-05T20:13:44.072Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.635 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89104 00:20:42.635 [2024-12-05 20:13:43.809606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.635 [2024-12-05 20:13:43.809717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.635 [2024-12-05 20:13:43.809761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.635 [2024-12-05 20:13:43.809773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:42.635 20:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89104 00:20:42.895 [2024-12-05 20:13:44.090022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.834 20:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:43.834 00:20:43.834 real 0m17.278s 00:20:43.834 user 0m22.548s 00:20:43.834 sys 0m1.629s 00:20:43.834 20:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.834 ************************************ 00:20:43.834 END TEST raid_rebuild_test_sb_md_interleaved 00:20:43.834 ************************************ 00:20:43.834 20:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.834 20:13:45 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:43.834 20:13:45 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:43.834 20:13:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89104 ']' 00:20:43.834 20:13:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89104 00:20:43.834 20:13:45 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:43.834 00:20:43.834 real 11m57.071s 00:20:43.834 user 16m13.907s 00:20:43.834 sys 1m50.321s 00:20:43.834 20:13:45 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.834 20:13:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.834 ************************************ 00:20:43.834 END TEST bdev_raid 00:20:43.834 ************************************ 00:20:44.094 20:13:45 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:44.094 20:13:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:44.094 20:13:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.094 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:20:44.094 
************************************ 00:20:44.094 START TEST spdkcli_raid 00:20:44.094 ************************************ 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:44.094 * Looking for test storage... 00:20:44.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.094 20:13:45 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.094 --rc genhtml_branch_coverage=1 00:20:44.094 --rc genhtml_function_coverage=1 00:20:44.094 --rc genhtml_legend=1 00:20:44.094 --rc geninfo_all_blocks=1 00:20:44.094 --rc geninfo_unexecuted_blocks=1 00:20:44.094 00:20:44.094 ' 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.094 --rc genhtml_branch_coverage=1 00:20:44.094 --rc genhtml_function_coverage=1 00:20:44.094 --rc genhtml_legend=1 00:20:44.094 --rc geninfo_all_blocks=1 00:20:44.094 --rc geninfo_unexecuted_blocks=1 00:20:44.094 00:20:44.094 ' 00:20:44.094 
20:13:45 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.094 --rc genhtml_branch_coverage=1 00:20:44.094 --rc genhtml_function_coverage=1 00:20:44.094 --rc genhtml_legend=1 00:20:44.094 --rc geninfo_all_blocks=1 00:20:44.094 --rc geninfo_unexecuted_blocks=1 00:20:44.094 00:20:44.094 ' 00:20:44.094 20:13:45 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.094 --rc genhtml_branch_coverage=1 00:20:44.094 --rc genhtml_function_coverage=1 00:20:44.094 --rc genhtml_legend=1 00:20:44.094 --rc geninfo_all_blocks=1 00:20:44.094 --rc geninfo_unexecuted_blocks=1 00:20:44.094 00:20:44.094 ' 00:20:44.094 20:13:45 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:44.353 20:13:45 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89786 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:44.353 20:13:45 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89786 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89786 ']' 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.353 20:13:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.353 [2024-12-05 20:13:45.679317] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:20:44.353 [2024-12-05 20:13:45.679559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89786 ] 00:20:44.612 [2024-12-05 20:13:45.863515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:44.613 [2024-12-05 20:13:45.975042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.613 [2024-12-05 20:13:45.975073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.549 20:13:46 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.549 20:13:46 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:45.549 20:13:46 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:45.550 20:13:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.550 20:13:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.550 20:13:46 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:45.550 20:13:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.550 20:13:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.550 20:13:46 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:45.550 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:45.550 ' 00:20:46.925 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:46.925 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:47.184 20:13:48 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:47.184 20:13:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.184 20:13:48 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.184 20:13:48 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:47.184 20:13:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.184 20:13:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.184 20:13:48 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:47.184 ' 00:20:48.562 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:48.562 20:13:49 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:48.562 20:13:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.562 20:13:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.562 20:13:49 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:48.562 20:13:49 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.562 20:13:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.562 20:13:49 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:48.562 20:13:49 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:48.822 20:13:50 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:49.081 20:13:50 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:49.081 20:13:50 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:49.081 20:13:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.081 20:13:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.081 20:13:50 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:49.081 20:13:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.081 20:13:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.081 20:13:50 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:49.081 ' 00:20:50.019 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:50.019 20:13:51 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:50.019 20:13:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.019 20:13:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.277 20:13:51 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:50.277 20:13:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.277 20:13:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.277 20:13:51 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:50.277 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:50.277 ' 00:20:51.658 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:51.658 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:51.658 20:13:52 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:51.658 20:13:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.658 20:13:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.658 20:13:53 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89786 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89786 ']' 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89786 00:20:51.658 20:13:53 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89786 00:20:51.658 killing process with pid 89786 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89786' 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89786 00:20:51.658 20:13:53 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89786 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89786 ']' 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89786 00:20:54.251 20:13:55 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89786 ']' 00:20:54.251 20:13:55 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89786 00:20:54.251 Process with pid 89786 is not found 00:20:54.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89786) - No such process 00:20:54.251 20:13:55 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89786 is not found' 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:54.251 20:13:55 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:54.251 00:20:54.251 real 0m10.035s 00:20:54.251 user 0m20.518s 00:20:54.251 sys 
0m1.275s 00:20:54.251 20:13:55 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.251 20:13:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.251 ************************************ 00:20:54.251 END TEST spdkcli_raid 00:20:54.251 ************************************ 00:20:54.251 20:13:55 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:54.251 20:13:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.251 20:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.251 20:13:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.251 ************************************ 00:20:54.251 START TEST blockdev_raid5f 00:20:54.251 ************************************ 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:54.251 * Looking for test storage... 00:20:54.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.251 20:13:55 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:54.251 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.251 --rc genhtml_branch_coverage=1 00:20:54.251 --rc genhtml_function_coverage=1 00:20:54.251 --rc genhtml_legend=1 00:20:54.251 --rc geninfo_all_blocks=1 00:20:54.251 --rc geninfo_unexecuted_blocks=1 00:20:54.251 00:20:54.251 ' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.251 --rc genhtml_branch_coverage=1 00:20:54.251 --rc genhtml_function_coverage=1 00:20:54.251 --rc genhtml_legend=1 00:20:54.251 --rc geninfo_all_blocks=1 00:20:54.251 --rc geninfo_unexecuted_blocks=1 00:20:54.251 00:20:54.251 ' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.251 --rc genhtml_branch_coverage=1 00:20:54.251 --rc genhtml_function_coverage=1 00:20:54.251 --rc genhtml_legend=1 00:20:54.251 --rc geninfo_all_blocks=1 00:20:54.251 --rc geninfo_unexecuted_blocks=1 00:20:54.251 00:20:54.251 ' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.251 --rc genhtml_branch_coverage=1 00:20:54.251 --rc genhtml_function_coverage=1 00:20:54.251 --rc genhtml_legend=1 00:20:54.251 --rc geninfo_all_blocks=1 00:20:54.251 --rc geninfo_unexecuted_blocks=1 00:20:54.251 00:20:54.251 ' 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90055 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:54.251 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90055 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90055 ']' 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.251 20:13:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:54.511 [2024-12-05 20:13:55.773366] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:54.511 [2024-12-05 20:13:55.773490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90055 ] 00:20:54.770 [2024-12-05 20:13:55.954624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.770 [2024-12-05 20:13:56.062627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.729 20:13:56 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.729 20:13:56 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:55.729 20:13:56 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:55.729 20:13:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:55.729 20:13:56 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:55.729 20:13:56 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.729 20:13:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.729 Malloc0 00:20:55.729 Malloc1 00:20:55.729 Malloc2 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.730 20:13:57 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:55.730 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b619e40a-26a5-48e4-baa4-44cd09171570"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b619e40a-26a5-48e4-baa4-44cd09171570",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b619e40a-26a5-48e4-baa4-44cd09171570",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58d1fabb-d614-4385-b275-6449cae66674",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"3e3793cf-615b-43c9-bba9-48af99ed3d1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b5e39652-c0a4-409f-97ab-ddb8af3019cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:55.990 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:55.990 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:55.990 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:55.990 20:13:57 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90055 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90055 ']' 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90055 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90055 00:20:55.990 killing process with pid 90055 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90055' 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90055 00:20:55.990 20:13:57 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90055 00:20:58.532 20:13:59 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:58.532 20:13:59 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:58.532 20:13:59 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:58.532 20:13:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.532 20:13:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:58.532 ************************************ 00:20:58.532 START TEST bdev_hello_world 00:20:58.532 ************************************ 00:20:58.532 20:13:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:58.532 [2024-12-05 20:13:59.844869] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:58.532 [2024-12-05 20:13:59.844986] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90122 ] 00:20:58.793 [2024-12-05 20:14:00.024177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.793 [2024-12-05 20:14:00.133951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.363 [2024-12-05 20:14:00.663974] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:59.363 [2024-12-05 20:14:00.664022] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:59.363 [2024-12-05 20:14:00.664039] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:59.363 [2024-12-05 20:14:00.664477] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:59.363 [2024-12-05 20:14:00.664623] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:59.363 [2024-12-05 20:14:00.664640] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:59.363 [2024-12-05 20:14:00.664684] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:59.363 00:20:59.363 [2024-12-05 20:14:00.664700] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:00.745 00:21:00.745 real 0m2.229s 00:21:00.745 user 0m1.851s 00:21:00.745 sys 0m0.254s 00:21:00.745 20:14:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.745 20:14:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:00.745 ************************************ 00:21:00.745 END TEST bdev_hello_world 00:21:00.745 ************************************ 00:21:00.745 20:14:02 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:00.745 20:14:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.745 20:14:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.745 20:14:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:00.745 ************************************ 00:21:00.745 START TEST bdev_bounds 00:21:00.745 ************************************ 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90170 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90170' 00:21:00.745 Process bdevio pid: 90170 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90170 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90170 ']' 00:21:00.745 20:14:02 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.745 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:00.745 [2024-12-05 20:14:02.144542] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:00.745 [2024-12-05 20:14:02.144668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90170 ] 00:21:01.005 [2024-12-05 20:14:02.324257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:01.005 [2024-12-05 20:14:02.433768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.005 [2024-12-05 20:14:02.433986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.005 [2024-12-05 20:14:02.434037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.576 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.576 20:14:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:01.576 20:14:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:01.836 I/O targets: 00:21:01.836 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:01.836 00:21:01.836 
00:21:01.836 CUnit - A unit testing framework for C - Version 2.1-3 00:21:01.836 http://cunit.sourceforge.net/ 00:21:01.836 00:21:01.836 00:21:01.836 Suite: bdevio tests on: raid5f 00:21:01.836 Test: blockdev write read block ...passed 00:21:01.836 Test: blockdev write zeroes read block ...passed 00:21:01.836 Test: blockdev write zeroes read no split ...passed 00:21:01.836 Test: blockdev write zeroes read split ...passed 00:21:02.097 Test: blockdev write zeroes read split partial ...passed 00:21:02.097 Test: blockdev reset ...passed 00:21:02.097 Test: blockdev write read 8 blocks ...passed 00:21:02.097 Test: blockdev write read size > 128k ...passed 00:21:02.097 Test: blockdev write read invalid size ...passed 00:21:02.097 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:02.097 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:02.097 Test: blockdev write read max offset ...passed 00:21:02.097 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:02.097 Test: blockdev writev readv 8 blocks ...passed 00:21:02.097 Test: blockdev writev readv 30 x 1block ...passed 00:21:02.097 Test: blockdev writev readv block ...passed 00:21:02.097 Test: blockdev writev readv size > 128k ...passed 00:21:02.097 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:02.097 Test: blockdev comparev and writev ...passed 00:21:02.097 Test: blockdev nvme passthru rw ...passed 00:21:02.097 Test: blockdev nvme passthru vendor specific ...passed 00:21:02.097 Test: blockdev nvme admin passthru ...passed 00:21:02.097 Test: blockdev copy ...passed 00:21:02.097 00:21:02.097 Run Summary: Type Total Ran Passed Failed Inactive 00:21:02.097 suites 1 1 n/a 0 0 00:21:02.097 tests 23 23 23 0 0 00:21:02.097 asserts 130 130 130 0 n/a 00:21:02.097 00:21:02.097 Elapsed time = 0.592 seconds 00:21:02.097 0 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90170 00:21:02.097 
20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90170 ']' 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90170 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90170 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90170' 00:21:02.097 killing process with pid 90170 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90170 00:21:02.097 20:14:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90170 00:21:03.479 20:14:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:03.479 00:21:03.479 real 0m2.648s 00:21:03.479 user 0m6.482s 00:21:03.479 sys 0m0.410s 00:21:03.479 ************************************ 00:21:03.479 END TEST bdev_bounds 00:21:03.479 ************************************ 00:21:03.479 20:14:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.479 20:14:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:03.479 20:14:04 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:03.479 20:14:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:03.479 20:14:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.479 
20:14:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:03.479 ************************************ 00:21:03.479 START TEST bdev_nbd 00:21:03.479 ************************************ 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:03.479 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:03.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90224 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90224 /var/tmp/spdk-nbd.sock 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90224 ']' 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.480 20:14:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:03.480 [2024-12-05 20:14:04.881807] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:03.480 [2024-12-05 20:14:04.882083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.739 [2024-12-05 20:14:05.058814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.739 [2024-12-05 20:14:05.164673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:04.307 20:14:05 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.565 1+0 records in 00:21:04.565 1+0 records out 00:21:04.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048375 s, 8.5 MB/s 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:04.565 20:14:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:04.823 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:04.823 { 00:21:04.823 "nbd_device": "/dev/nbd0", 00:21:04.823 "bdev_name": "raid5f" 00:21:04.823 } 00:21:04.823 ]' 00:21:04.823 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:04.823 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:04.823 { 00:21:04.823 "nbd_device": "/dev/nbd0", 00:21:04.823 "bdev_name": "raid5f" 00:21:04.823 } 00:21:04.823 ]' 00:21:04.823 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.084 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:05.343 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:05.344 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:05.603 /dev/nbd0 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:05.603 20:14:06 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.603 20:14:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.603 1+0 records in 00:21:05.603 1+0 records out 00:21:05.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427693 s, 9.6 MB/s 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:05.603 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.604 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.604 20:14:07 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:05.604 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.604 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:05.863 { 00:21:05.863 "nbd_device": "/dev/nbd0", 00:21:05.863 "bdev_name": "raid5f" 00:21:05.863 } 00:21:05.863 ]' 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:05.863 { 00:21:05.863 "nbd_device": "/dev/nbd0", 00:21:05.863 "bdev_name": "raid5f" 00:21:05.863 } 00:21:05.863 ]' 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:05.863 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:06.123 256+0 records in 00:21:06.123 256+0 records out 00:21:06.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137493 s, 76.3 MB/s 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:06.123 256+0 records in 00:21:06.123 256+0 records out 00:21:06.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300318 s, 34.9 MB/s 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.123 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:06.383 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:06.643 20:14:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:06.643 malloc_lvol_verify 00:21:06.643 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:06.903 a0bb6e1b-9a40-4e37-94f2-e1e28df05812 00:21:06.903 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:07.163 973a7d6b-63ba-4b71-8e78-471cee5e58a9 00:21:07.163 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:07.423 /dev/nbd0 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:07.423 mke2fs 1.47.0 (5-Feb-2023) 00:21:07.423 Discarding device blocks: 0/4096 done 00:21:07.423 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:07.423 00:21:07.423 Allocating group tables: 0/1 done 00:21:07.423 Writing inode tables: 0/1 done 00:21:07.423 Creating journal (1024 blocks): done 00:21:07.423 Writing superblocks and filesystem accounting information: 0/1 done 00:21:07.423 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.423 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90224 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90224 ']' 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90224 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90224 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.684 killing process with pid 90224 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90224' 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90224 00:21:07.684 20:14:08 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90224 00:21:09.068 20:14:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:09.068 00:21:09.068 real 0m5.542s 00:21:09.068 user 0m7.447s 00:21:09.068 sys 0m1.372s 00:21:09.068 20:14:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.068 20:14:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:09.068 ************************************ 00:21:09.068 END TEST bdev_nbd 00:21:09.068 ************************************ 00:21:09.068 20:14:10 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:09.068 20:14:10 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:21:09.068 20:14:10 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:21:09.068 20:14:10 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:09.068 20:14:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.068 20:14:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.068 20:14:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:09.068 ************************************ 00:21:09.068 START TEST bdev_fio 00:21:09.068 ************************************ 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:09.068 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:09.068 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:09.328 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:09.329 ************************************ 00:21:09.329 START TEST bdev_fio_rw_verify 00:21:09.329 ************************************ 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:09.329 20:14:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:09.588 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:09.588 fio-3.35 00:21:09.588 Starting 1 thread 00:21:21.805 00:21:21.805 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90428: Thu Dec 5 20:14:21 2024 00:21:21.805 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(486MiB/10001msec) 00:21:21.805 slat (nsec): min=17798, max=86277, avg=19521.25, stdev=1652.97 00:21:21.805 clat (usec): min=10, max=287, avg=130.07, stdev=45.91 00:21:21.805 lat (usec): min=30, max=308, avg=149.59, stdev=46.07 00:21:21.805 clat percentiles (usec): 00:21:21.805 | 50.000th=[ 135], 99.000th=[ 210], 99.900th=[ 233], 99.990th=[ 260], 00:21:21.805 | 99.999th=[ 281] 00:21:21.805 write: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(502MiB/9875msec); 0 zone resets 00:21:21.805 slat (usec): min=7, max=173, avg=15.82, stdev= 3.28 00:21:21.805 clat (usec): min=58, max=1078, avg=296.08, stdev=37.52 00:21:21.805 lat (usec): min=73, max=1251, avg=311.90, stdev=38.32 00:21:21.805 clat percentiles (usec): 00:21:21.805 | 50.000th=[ 302], 99.000th=[ 367], 99.900th=[ 537], 99.990th=[ 947], 00:21:21.805 | 99.999th=[ 1012] 00:21:21.805 bw ( KiB/s): min=48672, max=54608, per=98.87%, avg=51460.74, stdev=1409.96, samples=19 00:21:21.805 iops : min=12168, max=13652, avg=12865.16, stdev=352.50, samples=19 00:21:21.805 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.66%, 250=39.05%, 500=44.22% 00:21:21.805 lat (usec) : 750=0.04%, 1000=0.02% 00:21:21.805 lat (msec) : 2=0.01% 00:21:21.805 cpu : usr=98.88%, sys=0.45%, ctx=34, majf=0, minf=10160 00:21:21.805 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:21.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.805 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.806 issued rwts: total=124427,128491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.806 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:21.806 00:21:21.806 Run status group 0 (all jobs): 00:21:21.806 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=486MiB (510MB), run=10001-10001msec 00:21:21.806 WRITE: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=502MiB (526MB), run=9875-9875msec 00:21:22.066 ----------------------------------------------------- 00:21:22.066 Suppressions used: 00:21:22.066 count bytes template 00:21:22.066 1 7 /usr/src/fio/parse.c 00:21:22.066 217 20832 /usr/src/fio/iolog.c 00:21:22.066 1 8 libtcmalloc_minimal.so 00:21:22.066 1 904 libcrypto.so 00:21:22.066 ----------------------------------------------------- 00:21:22.066 00:21:22.066 00:21:22.066 real 0m12.779s 00:21:22.066 user 0m13.053s 00:21:22.066 sys 0m0.759s 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:22.066 ************************************ 00:21:22.066 END TEST bdev_fio_rw_verify 00:21:22.066 ************************************ 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:22.066 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b619e40a-26a5-48e4-baa4-44cd09171570"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"b619e40a-26a5-48e4-baa4-44cd09171570",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b619e40a-26a5-48e4-baa4-44cd09171570",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58d1fabb-d614-4385-b275-6449cae66674",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3e3793cf-615b-43c9-bba9-48af99ed3d1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b5e39652-c0a4-409f-97ab-ddb8af3019cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:22.067 /home/vagrant/spdk_repo/spdk 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:21:22.067 00:21:22.067 real 0m13.081s 00:21:22.067 user 0m13.181s 00:21:22.067 sys 0m0.905s 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.067 20:14:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:22.067 ************************************ 00:21:22.067 END TEST bdev_fio 00:21:22.067 ************************************ 00:21:22.328 20:14:23 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:22.328 20:14:23 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:22.328 20:14:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:22.328 20:14:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.328 20:14:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:22.328 ************************************ 00:21:22.328 START TEST bdev_verify 00:21:22.328 ************************************ 00:21:22.328 20:14:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:22.328 [2024-12-05 20:14:23.640182] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:22.328 [2024-12-05 20:14:23.640303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90591 ] 00:21:22.588 [2024-12-05 20:14:23.820849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:22.588 [2024-12-05 20:14:23.925640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.588 [2024-12-05 20:14:23.925689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.165 Running I/O for 5 seconds... 00:21:25.477 10505.00 IOPS, 41.04 MiB/s [2024-12-05T20:14:27.853Z] 10616.00 IOPS, 41.47 MiB/s [2024-12-05T20:14:28.794Z] 10620.67 IOPS, 41.49 MiB/s [2024-12-05T20:14:29.734Z] 10619.00 IOPS, 41.48 MiB/s [2024-12-05T20:14:29.734Z] 10621.20 IOPS, 41.49 MiB/s 00:21:28.297 Latency(us) 00:21:28.297 [2024-12-05T20:14:29.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.297 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:28.297 Verification LBA range: start 0x0 length 0x2000 00:21:28.297 raid5f : 5.02 6508.36 25.42 0.00 0.00 29645.11 222.69 21177.57 00:21:28.297 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.297 Verification LBA range: start 0x2000 length 0x2000 00:21:28.297 raid5f : 5.02 4118.27 16.09 0.00 0.00 46836.20 236.10 33884.12 00:21:28.297 [2024-12-05T20:14:29.734Z] =================================================================================================================== 00:21:28.297 [2024-12-05T20:14:29.734Z] Total : 10626.63 41.51 0.00 0.00 36311.24 222.69 33884.12 00:21:29.678 00:21:29.678 real 0m7.444s 00:21:29.678 user 0m13.751s 00:21:29.678 sys 0m0.291s 00:21:29.678 20:14:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.678 20:14:30 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:29.678 ************************************ 00:21:29.678 END TEST bdev_verify 00:21:29.678 ************************************ 00:21:29.678 20:14:31 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:29.678 20:14:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:29.678 20:14:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.678 20:14:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:29.678 ************************************ 00:21:29.678 START TEST bdev_verify_big_io 00:21:29.678 ************************************ 00:21:29.678 20:14:31 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:29.938 [2024-12-05 20:14:31.152129] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:29.938 [2024-12-05 20:14:31.152236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90690 ] 00:21:29.938 [2024-12-05 20:14:31.326263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:30.198 [2024-12-05 20:14:31.461208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.198 [2024-12-05 20:14:31.461235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.768 Running I/O for 5 seconds... 
00:21:33.126 633.00 IOPS, 39.56 MiB/s [2024-12-05T20:14:35.131Z] 728.50 IOPS, 45.53 MiB/s [2024-12-05T20:14:36.506Z] 760.67 IOPS, 47.54 MiB/s [2024-12-05T20:14:37.441Z] 760.75 IOPS, 47.55 MiB/s [2024-12-05T20:14:37.441Z] 761.60 IOPS, 47.60 MiB/s 00:21:36.004 Latency(us) 00:21:36.004 [2024-12-05T20:14:37.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.004 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:36.004 Verification LBA range: start 0x0 length 0x200 00:21:36.004 raid5f : 5.22 438.19 27.39 0.00 0.00 7328450.76 188.70 322356.99 00:21:36.004 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:36.004 Verification LBA range: start 0x200 length 0x200 00:21:36.004 raid5f : 5.28 348.55 21.78 0.00 0.00 9044848.20 199.43 397451.51 00:21:36.004 [2024-12-05T20:14:37.441Z] =================================================================================================================== 00:21:36.004 [2024-12-05T20:14:37.441Z] Total : 786.73 49.17 0.00 0.00 8093882.48 188.70 397451.51 00:21:37.907 00:21:37.907 real 0m7.756s 00:21:37.907 user 0m14.268s 00:21:37.907 sys 0m0.384s 00:21:37.907 20:14:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.907 20:14:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.907 ************************************ 00:21:37.907 END TEST bdev_verify_big_io 00:21:37.907 ************************************ 00:21:37.907 20:14:38 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:37.907 20:14:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:37.907 20:14:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.907 20:14:38 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:37.907 ************************************ 00:21:37.907 START TEST bdev_write_zeroes 00:21:37.907 ************************************ 00:21:37.907 20:14:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:37.907 [2024-12-05 20:14:38.996443] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:37.907 [2024-12-05 20:14:38.996575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90788 ] 00:21:37.907 [2024-12-05 20:14:39.179193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.907 [2024-12-05 20:14:39.316035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.845 Running I/O for 1 seconds... 
00:21:39.783 28503.00 IOPS, 111.34 MiB/s 00:21:39.783 Latency(us) 00:21:39.783 [2024-12-05T20:14:41.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.783 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:39.783 raid5f : 1.01 28481.07 111.25 0.00 0.00 4480.78 1595.47 6152.94 00:21:39.783 [2024-12-05T20:14:41.220Z] =================================================================================================================== 00:21:39.783 [2024-12-05T20:14:41.220Z] Total : 28481.07 111.25 0.00 0.00 4480.78 1595.47 6152.94 00:21:41.164 00:21:41.164 real 0m3.518s 00:21:41.164 user 0m2.987s 00:21:41.164 sys 0m0.402s 00:21:41.164 20:14:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.164 20:14:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:41.164 ************************************ 00:21:41.164 END TEST bdev_write_zeroes 00:21:41.164 ************************************ 00:21:41.164 20:14:42 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:41.164 20:14:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:41.164 20:14:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.164 20:14:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:41.164 ************************************ 00:21:41.164 START TEST bdev_json_nonenclosed 00:21:41.164 ************************************ 00:21:41.164 20:14:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:41.164 [2024-12-05 
20:14:42.571682] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:41.164 [2024-12-05 20:14:42.571784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90848 ] 00:21:41.424 [2024-12-05 20:14:42.743522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.684 [2024-12-05 20:14:42.879037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.684 [2024-12-05 20:14:42.879148] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:41.684 [2024-12-05 20:14:42.879180] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:41.684 [2024-12-05 20:14:42.879193] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:41.945 00:21:41.945 real 0m0.657s 00:21:41.945 user 0m0.404s 00:21:41.945 sys 0m0.148s 00:21:41.945 20:14:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.945 20:14:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 ************************************ 00:21:41.945 END TEST bdev_json_nonenclosed 00:21:41.945 ************************************ 00:21:41.945 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:41.945 20:14:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:41.945 20:14:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.945 20:14:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 
************************************
00:21:41.945 START TEST bdev_json_nonarray
00:21:41.945 ************************************
00:21:41.945 20:14:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:41.945 [2024-12-05 20:14:43.310393] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:21:41.945 [2024-12-05 20:14:43.310587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90878 ]
00:21:42.206 [2024-12-05 20:14:43.490311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:42.206 [2024-12-05 20:14:43.622789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:42.206 [2024-12-05 20:14:43.622942] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:21:42.206 [2024-12-05 20:14:43.622966] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:21:42.206 [2024-12-05 20:14:43.622994] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:42.467 
00:21:42.467 real 0m0.669s
00:21:42.467 user 0m0.407s
00:21:42.467 sys 0m0.156s
00:21:42.467 20:14:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:42.467 20:14:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:21:42.467 ************************************
00:21:42.467 END TEST bdev_json_nonarray
00:21:42.467 ************************************
00:21:42.727 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:21:42.728 20:14:43 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:21:42.728 
00:21:42.728 real 0m48.552s
00:21:42.728 user 1m5.133s
00:21:42.728 sys 0m5.498s
00:21:42.728 20:14:43 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:42.728 20:14:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:21:42.728 ************************************
00:21:42.728 END TEST blockdev_raid5f
00:21:42.728 ************************************
00:21:42.728 20:14:44 -- spdk/autotest.sh@194 -- # uname -s
00:21:42.728 20:14:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@260 -- # timing_exit lib
00:21:42.728 20:14:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:42.728 20:14:44 -- common/autotest_common.sh@10 -- # set +x
00:21:42.728 20:14:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:21:42.728 20:14:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:21:42.728 20:14:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:21:42.728 20:14:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:21:42.728 20:14:44 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:42.728 20:14:44 -- common/autotest_common.sh@10 -- # set +x
00:21:42.728 20:14:44 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:21:42.728 20:14:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:21:42.728 20:14:44 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:21:42.728 20:14:44 -- common/autotest_common.sh@10 -- # set +x
00:21:45.268 INFO: APP EXITING
00:21:45.268 INFO: killing all VMs
00:21:45.268 INFO: killing vhost app
00:21:45.268 INFO: EXIT DONE
00:21:45.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:45.788 Waiting for block devices as requested
00:21:45.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:21:45.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:21:46.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:46.729 Cleaning
00:21:46.729 Removing: /var/run/dpdk/spdk0/config
00:21:46.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:21:46.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:21:46.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:21:46.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:21:46.729 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:21:46.729 Removing: /var/run/dpdk/spdk0/hugepage_info
00:21:46.729 Removing: /dev/shm/spdk_tgt_trace.pid57027
00:21:46.729 Removing: /var/run/dpdk/spdk0
00:21:46.729 Removing: /var/run/dpdk/spdk_pid56792
00:21:46.729 Removing: /var/run/dpdk/spdk_pid57027
00:21:46.729 Removing: /var/run/dpdk/spdk_pid57262
00:21:46.729 Removing: /var/run/dpdk/spdk_pid57366
00:21:46.729 Removing: /var/run/dpdk/spdk_pid57422
00:21:46.729 Removing: /var/run/dpdk/spdk_pid57550
00:21:46.989 Removing: /var/run/dpdk/spdk_pid57574
00:21:46.989 Removing: /var/run/dpdk/spdk_pid57784
00:21:46.989 Removing: /var/run/dpdk/spdk_pid57895
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58002
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58124
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58233
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58272
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58314
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58390
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58518
00:21:46.989 Removing: /var/run/dpdk/spdk_pid58974
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59049
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59123
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59144
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59301
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59318
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59466
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59482
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59552
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59575
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59639
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59663
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59859
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59900
00:21:46.989 Removing: /var/run/dpdk/spdk_pid59989
00:21:46.989 Removing: /var/run/dpdk/spdk_pid61330
00:21:46.989 Removing: /var/run/dpdk/spdk_pid61537
00:21:46.989 Removing: /var/run/dpdk/spdk_pid61683
00:21:46.989 Removing: /var/run/dpdk/spdk_pid62321
00:21:46.989 Removing: /var/run/dpdk/spdk_pid62532
00:21:46.989 Removing: /var/run/dpdk/spdk_pid62678
00:21:46.989 Removing: /var/run/dpdk/spdk_pid63321
00:21:46.989 Removing: /var/run/dpdk/spdk_pid63646
00:21:46.989 Removing: /var/run/dpdk/spdk_pid63796
00:21:46.989 Removing: /var/run/dpdk/spdk_pid65183
00:21:46.989 Removing: /var/run/dpdk/spdk_pid65436
00:21:46.989 Removing: /var/run/dpdk/spdk_pid65582
00:21:46.989 Removing: /var/run/dpdk/spdk_pid66978
00:21:46.990 Removing: /var/run/dpdk/spdk_pid67237
00:21:46.990 Removing: /var/run/dpdk/spdk_pid67381
00:21:46.990 Removing: /var/run/dpdk/spdk_pid68773
00:21:46.990 Removing: /var/run/dpdk/spdk_pid69213
00:21:46.990 Removing: /var/run/dpdk/spdk_pid69361
00:21:46.990 Removing: /var/run/dpdk/spdk_pid70849
00:21:46.990 Removing: /var/run/dpdk/spdk_pid71114
00:21:46.990 Removing: /var/run/dpdk/spdk_pid71265
00:21:46.990 Removing: /var/run/dpdk/spdk_pid72759
00:21:46.990 Removing: /var/run/dpdk/spdk_pid73029
00:21:46.990 Removing: /var/run/dpdk/spdk_pid73175
00:21:46.990 Removing: /var/run/dpdk/spdk_pid74653
00:21:46.990 Removing: /var/run/dpdk/spdk_pid75144
00:21:46.990 Removing: /var/run/dpdk/spdk_pid75290
00:21:46.990 Removing: /var/run/dpdk/spdk_pid75433
00:21:46.990 Removing: /var/run/dpdk/spdk_pid75857
00:21:46.990 Removing: /var/run/dpdk/spdk_pid76587
00:21:46.990 Removing: /var/run/dpdk/spdk_pid76963
00:21:47.251 Removing: /var/run/dpdk/spdk_pid77646
00:21:47.251 Removing: /var/run/dpdk/spdk_pid78093
00:21:47.251 Removing: /var/run/dpdk/spdk_pid78844
00:21:47.251 Removing: /var/run/dpdk/spdk_pid79253
00:21:47.251 Removing: /var/run/dpdk/spdk_pid81219
00:21:47.251 Removing: /var/run/dpdk/spdk_pid81658
00:21:47.251 Removing: /var/run/dpdk/spdk_pid82099
00:21:47.251 Removing: /var/run/dpdk/spdk_pid84187
00:21:47.251 Removing: /var/run/dpdk/spdk_pid84669
00:21:47.251 Removing: /var/run/dpdk/spdk_pid85191
00:21:47.251 Removing: /var/run/dpdk/spdk_pid86251
00:21:47.251 Removing: /var/run/dpdk/spdk_pid86574
00:21:47.251 Removing: /var/run/dpdk/spdk_pid87513
00:21:47.251 Removing: /var/run/dpdk/spdk_pid87843
00:21:47.251 Removing: /var/run/dpdk/spdk_pid88781
00:21:47.251 Removing: /var/run/dpdk/spdk_pid89104
00:21:47.251 Removing: /var/run/dpdk/spdk_pid89786
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90055
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90122
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90170
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90413
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90591
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90690
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90788
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90848
00:21:47.251 Removing: /var/run/dpdk/spdk_pid90878
00:21:47.251 Clean
00:21:47.251 20:14:48 -- common/autotest_common.sh@1453 -- # return 0
00:21:47.251 20:14:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:47.251 20:14:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:47.251 20:14:48 -- common/autotest_common.sh@10 -- # set +x
00:21:47.251 20:14:48 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:47.251 20:14:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:47.251 20:14:48 -- common/autotest_common.sh@10 -- # set +x
00:21:47.511 20:14:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:47.511 20:14:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:47.511 20:14:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:47.511 20:14:48 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:47.511 20:14:48 -- spdk/autotest.sh@398 -- # hostname
00:21:47.511 20:14:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:47.511 geninfo: WARNING: invalid characters removed from testname!
00:22:14.073 20:15:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:14.073 20:15:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:15.011 20:15:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:16.918 20:15:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:19.458 20:15:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:21.363 20:15:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:23.268 20:15:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:23.268 20:15:24 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:23.268 20:15:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:23.268 20:15:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:23.268 20:15:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:23.268 20:15:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:23.268 + [[ -n 5442 ]]
00:22:23.268 + sudo kill 5442
00:22:23.277 [Pipeline] }
00:22:23.293 [Pipeline] // timeout
00:22:23.298 [Pipeline] }
00:22:23.312 [Pipeline] // stage
00:22:23.317 [Pipeline] }
00:22:23.331 [Pipeline] // catchError
00:22:23.340 [Pipeline] stage
00:22:23.342 [Pipeline] { (Stop VM)
00:22:23.353 [Pipeline] sh
00:22:23.636 + vagrant halt
00:22:26.239 ==> default: Halting domain...
00:22:34.381 [Pipeline] sh
00:22:34.662 + vagrant destroy -f
00:22:37.199 ==> default: Removing domain...
00:22:37.213 [Pipeline] sh
00:22:37.504 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:22:37.513 [Pipeline] }
00:22:37.528 [Pipeline] // stage
00:22:37.533 [Pipeline] }
00:22:37.547 [Pipeline] // dir
00:22:37.552 [Pipeline] }
00:22:37.566 [Pipeline] // wrap
00:22:37.572 [Pipeline] }
00:22:37.584 [Pipeline] // catchError
00:22:37.593 [Pipeline] stage
00:22:37.595 [Pipeline] { (Epilogue)
00:22:37.607 [Pipeline] sh
00:22:37.893 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:42.120 [Pipeline] catchError
00:22:42.123 [Pipeline] {
00:22:42.135 [Pipeline] sh
00:22:42.419 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:42.419 Artifacts sizes are good
00:22:42.429 [Pipeline] }
00:22:42.443 [Pipeline] // catchError
00:22:42.455 [Pipeline] archiveArtifacts
00:22:42.462 Archiving artifacts
00:22:42.567 [Pipeline] cleanWs
00:22:42.578 [WS-CLEANUP] Deleting project workspace...
00:22:42.578 [WS-CLEANUP] Deferred wipeout is used...
00:22:42.585 [WS-CLEANUP] done
00:22:42.587 [Pipeline] }
00:22:42.602 [Pipeline] // stage
00:22:42.608 [Pipeline] }
00:22:42.621 [Pipeline] // node
00:22:42.626 [Pipeline] End of Pipeline
00:22:42.663 Finished: SUCCESS